2025-09-18 09:53:20.883137 | Job console starting
2025-09-18 09:53:20.897079 | Updating git repos
2025-09-18 09:53:20.968633 | Cloning repos into workspace
2025-09-18 09:53:21.168727 | Restoring repo states
2025-09-18 09:53:21.193393 | Merging changes
2025-09-18 09:53:21.193420 | Checking out repos
2025-09-18 09:53:21.586862 | Preparing playbooks
2025-09-18 09:53:22.223090 | Running Ansible setup
2025-09-18 09:53:26.462225 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-09-18 09:53:27.196601 |
2025-09-18 09:53:27.196767 | PLAY [Base pre]
2025-09-18 09:53:27.213517 |
2025-09-18 09:53:27.213643 | TASK [Setup log path fact]
2025-09-18 09:53:27.243402 | orchestrator | ok
2025-09-18 09:53:27.260981 |
2025-09-18 09:53:27.261118 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-18 09:53:27.307559 | orchestrator | ok
2025-09-18 09:53:27.324864 |
2025-09-18 09:53:27.324999 | TASK [emit-job-header : Print job information]
2025-09-18 09:53:27.372085 | # Job Information
2025-09-18 09:53:27.372388 | Ansible Version: 2.16.14
2025-09-18 09:53:27.372460 | Job: testbed-deploy-in-a-nutshell-ubuntu-24.04
2025-09-18 09:53:27.372551 | Pipeline: post
2025-09-18 09:53:27.372601 | Executor: 521e9411259a
2025-09-18 09:53:27.372643 | Triggered by: https://github.com/osism/testbed/commit/010a250178eeb37888d0f0000e82cb24fd457511
2025-09-18 09:53:27.372689 | Event ID: 4ae6c468-9475-11f0-973e-dcea454e0408
2025-09-18 09:53:27.383033 |
2025-09-18 09:53:27.383173 | LOOP [emit-job-header : Print node information]
2025-09-18 09:53:27.511424 | orchestrator | ok:
2025-09-18 09:53:27.511739 | orchestrator | # Node Information
2025-09-18 09:53:27.511807 | orchestrator | Inventory Hostname: orchestrator
2025-09-18 09:53:27.511858 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-09-18 09:53:27.511903 | orchestrator | Username: zuul-testbed05
2025-09-18 09:53:27.511946 | orchestrator | Distro: Debian 12.12
2025-09-18 09:53:27.511994 | orchestrator | Provider: static-testbed
2025-09-18 09:53:27.512114 | orchestrator | Region:
2025-09-18 09:53:27.512159 | orchestrator | Label: testbed-orchestrator
2025-09-18 09:53:27.512192 | orchestrator | Product Name: OpenStack Nova
2025-09-18 09:53:27.512224 | orchestrator | Interface IP: 81.163.193.140
2025-09-18 09:53:27.536772 |
2025-09-18 09:53:27.536913 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-09-18 09:53:28.006507 | orchestrator -> localhost | changed
2025-09-18 09:53:28.018190 |
2025-09-18 09:53:28.018336 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-09-18 09:53:29.064140 | orchestrator -> localhost | changed
2025-09-18 09:53:29.078586 |
2025-09-18 09:53:29.078710 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-09-18 09:53:29.356700 | orchestrator -> localhost | ok
2025-09-18 09:53:29.370579 |
2025-09-18 09:53:29.370734 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-09-18 09:53:29.393087 | orchestrator | ok
2025-09-18 09:53:29.410705 | orchestrator | included: /var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-09-18 09:53:29.419098 |
2025-09-18 09:53:29.419199 | TASK [add-build-sshkey : Create Temp SSH key]
2025-09-18 09:53:31.441219 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-09-18 09:53:31.441633 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/4abb468dbef14e5b8b9021c6a1c4ab57_id_rsa
2025-09-18 09:53:31.441716 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/4abb468dbef14e5b8b9021c6a1c4ab57_id_rsa.pub
2025-09-18 09:53:31.441773 | orchestrator -> localhost | The key fingerprint is:
2025-09-18 09:53:31.441823 | orchestrator -> localhost | SHA256:e3gRMCD3eEuiBrYBKBhmZwzuIaYkR971iFramnzzMsU zuul-build-sshkey
2025-09-18 09:53:31.441869 | orchestrator -> localhost | The key's randomart image is:
2025-09-18 09:53:31.441936 | orchestrator -> localhost | +---[RSA 3072]----+
2025-09-18 09:53:31.441984 | orchestrator -> localhost | |==+o. +.o |
2025-09-18 09:53:31.442031 | orchestrator -> localhost | |B+oo = = o |
2025-09-18 09:53:31.442075 | orchestrator -> localhost | |+=* + + = . |
2025-09-18 09:53:31.442118 | orchestrator -> localhost | |Bo.O . + . . |
2025-09-18 09:53:31.442160 | orchestrator -> localhost | |..+ +. S . |
2025-09-18 09:53:31.442215 | orchestrator -> localhost | | . + E o . |
2025-09-18 09:53:31.442259 | orchestrator -> localhost | | + o. o o |
2025-09-18 09:53:31.442300 | orchestrator -> localhost | | .oo o |
2025-09-18 09:53:31.442344 | orchestrator -> localhost | | o. |
2025-09-18 09:53:31.442387 | orchestrator -> localhost | +----[SHA256]-----+
2025-09-18 09:53:31.442523 | orchestrator -> localhost | ok: Runtime: 0:00:01.558634
2025-09-18 09:53:31.467847 |
2025-09-18 09:53:31.467993 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-09-18 09:53:31.505934 | orchestrator | ok
2025-09-18 09:53:31.519767 | orchestrator | included: /var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-09-18 09:53:31.528944 |
2025-09-18 09:53:31.529050 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-09-18 09:53:31.553026 | orchestrator | skipping: Conditional result was False
2025-09-18 09:53:31.562098 |
2025-09-18 09:53:31.562208 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-09-18 09:53:32.138038 | orchestrator | changed
2025-09-18 09:53:32.145088 |
2025-09-18 09:53:32.145202 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-09-18 09:53:32.417848 | orchestrator | ok
2025-09-18 09:53:32.426821 |
2025-09-18 09:53:32.426975 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-09-18 09:53:32.852099 | orchestrator | ok
2025-09-18 09:53:32.861417 |
2025-09-18 09:53:32.861563 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-09-18 09:53:33.266676 | orchestrator | ok
2025-09-18 09:53:33.272886 |
2025-09-18 09:53:33.272993 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-09-18 09:53:33.297210 | orchestrator | skipping: Conditional result was False
2025-09-18 09:53:33.312856 |
2025-09-18 09:53:33.313014 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-09-18 09:53:33.762713 | orchestrator -> localhost | changed
2025-09-18 09:53:33.776523 |
2025-09-18 09:53:33.776643 | TASK [add-build-sshkey : Add back temp key]
2025-09-18 09:53:34.135873 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/4abb468dbef14e5b8b9021c6a1c4ab57_id_rsa (zuul-build-sshkey)
2025-09-18 09:53:34.136392 | orchestrator -> localhost | ok: Runtime: 0:00:00.019373
2025-09-18 09:53:34.149380 |
2025-09-18 09:53:34.149574 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-09-18 09:53:34.549327 | orchestrator | ok
2025-09-18 09:53:34.560532 |
2025-09-18 09:53:34.560689 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-09-18 09:53:34.586701 | orchestrator | skipping: Conditional result was False
2025-09-18 09:53:34.651978 |
2025-09-18 09:53:34.652107 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-09-18 09:53:35.051639 | orchestrator | ok
2025-09-18 09:53:35.065290 |
2025-09-18 09:53:35.065410 | TASK [validate-host : Define zuul_info_dir fact]
2025-09-18 09:53:35.107457 | orchestrator | ok
2025-09-18 09:53:35.116069 |
2025-09-18 09:53:35.116172 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-09-18 09:53:35.423856 | orchestrator -> localhost | ok
2025-09-18 09:53:35.439929 |
2025-09-18 09:53:35.440078 | TASK [validate-host : Collect information about the host]
2025-09-18 09:53:36.648141 | orchestrator | ok
2025-09-18 09:53:36.661825 |
2025-09-18 09:53:36.661950 | TASK [validate-host : Sanitize hostname]
2025-09-18 09:53:36.720626 | orchestrator | ok
2025-09-18 09:53:36.726071 |
2025-09-18 09:53:36.726178 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-09-18 09:53:37.295717 | orchestrator -> localhost | changed
2025-09-18 09:53:37.302415 |
2025-09-18 09:53:37.302550 | TASK [validate-host : Collect information about zuul worker]
2025-09-18 09:53:37.722601 | orchestrator | ok
2025-09-18 09:53:37.728095 |
2025-09-18 09:53:37.728221 | TASK [validate-host : Write out all zuul information for each host]
2025-09-18 09:53:38.247757 | orchestrator -> localhost | changed
2025-09-18 09:53:38.261867 |
2025-09-18 09:53:38.262011 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-09-18 09:53:38.539327 | orchestrator | ok
2025-09-18 09:53:38.546902 |
2025-09-18 09:53:38.547023 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-09-18 09:54:24.100041 | orchestrator | changed:
2025-09-18 09:54:24.100303 | orchestrator | .d..t...... src/
2025-09-18 09:54:24.100346 | orchestrator | .d..t...... src/github.com/
2025-09-18 09:54:24.100376 | orchestrator | .d..t...... src/github.com/osism/
2025-09-18 09:54:24.100402 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-09-18 09:54:24.100428 | orchestrator | RedHat.yml
2025-09-18 09:54:24.114518 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-09-18 09:54:24.114536 | orchestrator | RedHat.yml
2025-09-18 09:54:24.114589 | orchestrator | = 2.2.0"...
2025-09-18 09:54:34.561984 | orchestrator | 09:54:34.561 STDOUT terraform: - Finding latest version of hashicorp/null...
2025-09-18 09:54:34.593551 | orchestrator | 09:54:34.593 STDOUT terraform: - Finding terraform-provider-openstack/openstack versions matching ">= 1.53.0"...
2025-09-18 09:54:35.086713 | orchestrator | 09:54:35.086 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-09-18 09:54:35.789989 | orchestrator | 09:54:35.789 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-09-18 09:54:35.864118 | orchestrator | 09:54:35.863 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-09-18 09:54:36.477811 | orchestrator | 09:54:36.477 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-09-18 09:54:36.552452 | orchestrator | 09:54:36.552 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.3.2...
2025-09-18 09:54:37.222795 | orchestrator | 09:54:37.222 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.3.2 (signed, key ID 4F80527A391BEFD2)
2025-09-18 09:54:37.222855 | orchestrator | 09:54:37.222 STDOUT terraform: Providers are signed by their developers.
2025-09-18 09:54:37.222891 | orchestrator | 09:54:37.222 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-09-18 09:54:37.222949 | orchestrator | 09:54:37.222 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-09-18 09:54:37.223112 | orchestrator | 09:54:37.222 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-09-18 09:54:37.223380 | orchestrator | 09:54:37.223 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-09-18 09:54:37.223390 | orchestrator | 09:54:37.223 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-09-18 09:54:37.223395 | orchestrator | 09:54:37.223 STDOUT terraform: you run "tofu init" in the future.
2025-09-18 09:54:37.223403 | orchestrator | 09:54:37.223 STDOUT terraform: OpenTofu has been successfully initialized!
2025-09-18 09:54:37.223468 | orchestrator | 09:54:37.223 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-09-18 09:54:37.223555 | orchestrator | 09:54:37.223 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-09-18 09:54:37.223584 | orchestrator | 09:54:37.223 STDOUT terraform: should now work.
2025-09-18 09:54:37.223670 | orchestrator | 09:54:37.223 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-09-18 09:54:37.223760 | orchestrator | 09:54:37.223 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-09-18 09:54:37.223840 | orchestrator | 09:54:37.223 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-09-18 09:54:37.310203 | orchestrator | 09:54:37.308 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-18 09:54:37.314196 | orchestrator | 09:54:37.312 WARN The `workspace` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- workspace` instead.
2025-09-18 09:54:37.528591 | orchestrator | 09:54:37.525 STDOUT terraform: Created and switched to workspace "ci"!
2025-09-18 09:54:37.528647 | orchestrator | 09:54:37.525 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-09-18 09:54:37.528656 | orchestrator | 09:54:37.525 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-09-18 09:54:37.528661 | orchestrator | 09:54:37.525 STDOUT terraform: for this configuration.
2025-09-18 09:54:37.694212 | orchestrator | 09:54:37.694 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
2025-09-18 09:54:37.694264 | orchestrator | 09:54:37.694 WARN The `fmt` command is deprecated and will be removed in a future version of Terragrunt. Use `terragrunt run -- fmt` instead.
2025-09-18 09:54:37.792511 | orchestrator | 09:54:37.792 STDOUT terraform: ci.auto.tfvars
2025-09-18 09:54:38.469160 | orchestrator | 09:54:38.466 STDOUT terraform: default_custom.tf
2025-09-18 09:54:39.564582 | orchestrator | 09:54:39.563 WARN The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed05/terraform` instead.
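The init output above shows OpenTofu resolving three providers against version constraints (`terraform-provider-openstack/openstack` matching `>= 1.53.0`, plus `hashicorp/null` and `hashicorp/local`). A minimal sketch of the `required_providers` block that would produce such a resolution is shown below; this is an illustration assembled from the constraints visible in the log, not the actual file from the osism/testbed repository, and the exact version bounds for `null` and `local` are assumptions (the `local` constraint is only partially visible in the truncated output).

```hcl
terraform {
  required_providers {
    # Constraint taken from the "Finding ... versions matching" line above.
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = ">= 1.53.0"
    }
    # No constraint shown in the log; init reports "Finding latest version".
    null = {
      source = "hashicorp/null"
    }
    local = {
      source = "hashicorp/local"
    }
  }
}
```

With this block in place, `tofu init` records the resolved versions (here v3.3.2, v3.2.4, and v2.5.3) in `.terraform.lock.hcl`, which is why the output recommends committing that file.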
2025-09-18 09:54:40.562616 | orchestrator | 09:54:40.560 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
2025-09-18 09:54:41.118433 | orchestrator | 09:54:41.118 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-09-18 09:54:41.421782 | orchestrator | 09:54:41.421 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-09-18 09:54:41.421854 | orchestrator | 09:54:41.421 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-09-18 09:54:41.421862 | orchestrator | 09:54:41.421 STDOUT terraform:   + create
2025-09-18 09:54:41.421868 | orchestrator | 09:54:41.421 STDOUT terraform:  <= read (data resources)
2025-09-18 09:54:41.421876 | orchestrator | 09:54:41.421 STDOUT terraform: OpenTofu will perform the following actions:
2025-09-18 09:54:41.422067 | orchestrator | 09:54:41.421 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-09-18 09:54:41.422096 | orchestrator | 09:54:41.422 STDOUT terraform:   # (config refers to values not yet known)
2025-09-18 09:54:41.422127 | orchestrator | 09:54:41.422 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-09-18 09:54:41.422155 | orchestrator | 09:54:41.422 STDOUT terraform:       + checksum = (known after apply)
2025-09-18 09:54:41.422222 | orchestrator | 09:54:41.422 STDOUT terraform:       + created_at = (known after apply)
2025-09-18 09:54:41.422229 | orchestrator | 09:54:41.422 STDOUT terraform:       + file = (known after apply)
2025-09-18 09:54:41.422254 | orchestrator | 09:54:41.422 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.422283 | orchestrator | 09:54:41.422 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.422311 | orchestrator | 09:54:41.422 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-18 09:54:41.422340 | orchestrator | 09:54:41.422 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-18 09:54:41.422360 | orchestrator | 09:54:41.422 STDOUT terraform:       + most_recent = true
2025-09-18 09:54:41.422387 | orchestrator | 09:54:41.422 STDOUT terraform:       + name = (known after apply)
2025-09-18 09:54:41.422414 | orchestrator | 09:54:41.422 STDOUT terraform:       + protected = (known after apply)
2025-09-18 09:54:41.422444 | orchestrator | 09:54:41.422 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.422474 | orchestrator | 09:54:41.422 STDOUT terraform:       + schema = (known after apply)
2025-09-18 09:54:41.422503 | orchestrator | 09:54:41.422 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-18 09:54:41.422532 | orchestrator | 09:54:41.422 STDOUT terraform:       + tags = (known after apply)
2025-09-18 09:54:41.422561 | orchestrator | 09:54:41.422 STDOUT terraform:       + updated_at = (known after apply)
2025-09-18 09:54:41.422576 | orchestrator | 09:54:41.422 STDOUT terraform:     }
2025-09-18 09:54:41.422843 | orchestrator | 09:54:41.422 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-09-18 09:54:41.422969 | orchestrator | 09:54:41.422 STDOUT terraform:   # (config refers to values not yet known)
2025-09-18 09:54:41.423058 | orchestrator | 09:54:41.422 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-09-18 09:54:41.423097 | orchestrator | 09:54:41.423 STDOUT terraform:       + checksum = (known after apply)
2025-09-18 09:54:41.423225 | orchestrator | 09:54:41.423 STDOUT terraform:       + created_at = (known after apply)
2025-09-18 09:54:41.423273 | orchestrator | 09:54:41.423 STDOUT terraform:       + file = (known after apply)
2025-09-18 09:54:41.423434 | orchestrator | 09:54:41.423 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.423540 | orchestrator | 09:54:41.423 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.423609 | orchestrator | 09:54:41.423 STDOUT terraform:       + min_disk_gb = (known after apply)
2025-09-18 09:54:41.423707 | orchestrator | 09:54:41.423 STDOUT terraform:       + min_ram_mb = (known after apply)
2025-09-18 09:54:41.423763 | orchestrator | 09:54:41.423 STDOUT terraform:       + most_recent = true
2025-09-18 09:54:41.423881 | orchestrator | 09:54:41.423 STDOUT terraform:       + name = (known after apply)
2025-09-18 09:54:41.423982 | orchestrator | 09:54:41.423 STDOUT terraform:       + protected = (known after apply)
2025-09-18 09:54:41.424098 | orchestrator | 09:54:41.423 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.424153 | orchestrator | 09:54:41.424 STDOUT terraform:       + schema = (known after apply)
2025-09-18 09:54:41.424258 | orchestrator | 09:54:41.424 STDOUT terraform:       + size_bytes = (known after apply)
2025-09-18 09:54:41.424410 | orchestrator | 09:54:41.424 STDOUT terraform:       + tags = (known after apply)
2025-09-18 09:54:41.424464 | orchestrator | 09:54:41.424 STDOUT terraform:       + updated_at = (known after apply)
2025-09-18 09:54:41.424500 | orchestrator | 09:54:41.424 STDOUT terraform:     }
2025-09-18 09:54:41.424903 | orchestrator | 09:54:41.424 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-09-18 09:54:41.424954 | orchestrator | 09:54:41.424 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-09-18 09:54:41.425090 | orchestrator | 09:54:41.424 STDOUT terraform:       + content = (known after apply)
2025-09-18 09:54:41.425216 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-18 09:54:41.425373 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-18 09:54:41.425453 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-18 09:54:41.425526 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-18 09:54:41.425621 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-18 09:54:41.425663 | orchestrator | 09:54:41.425 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-18 09:54:41.425720 | orchestrator | 09:54:41.425 STDOUT terraform:       + directory_permission = "0777"
2025-09-18 09:54:41.425797 | orchestrator | 09:54:41.425 STDOUT terraform:       + file_permission = "0644"
2025-09-18 09:54:41.425877 | orchestrator | 09:54:41.425 STDOUT terraform:       + filename = ".MANAGER_ADDRESS.ci"
2025-09-18 09:54:41.425976 | orchestrator | 09:54:41.425 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.426026 | orchestrator | 09:54:41.425 STDOUT terraform:     }
2025-09-18 09:54:41.426426 | orchestrator | 09:54:41.426 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-09-18 09:54:41.426467 | orchestrator | 09:54:41.426 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-09-18 09:54:41.426614 | orchestrator | 09:54:41.426 STDOUT terraform:       + content = (known after apply)
2025-09-18 09:54:41.426770 | orchestrator | 09:54:41.426 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-18 09:54:41.426915 | orchestrator | 09:54:41.426 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-18 09:54:41.426966 | orchestrator | 09:54:41.426 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-18 09:54:41.427054 | orchestrator | 09:54:41.426 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-18 09:54:41.427098 | orchestrator | 09:54:41.427 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-18 09:54:41.427234 | orchestrator | 09:54:41.427 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-18 09:54:41.427303 | orchestrator | 09:54:41.427 STDOUT terraform:       + directory_permission = "0777"
2025-09-18 09:54:41.427348 | orchestrator | 09:54:41.427 STDOUT terraform:       + file_permission = "0644"
2025-09-18 09:54:41.427433 | orchestrator | 09:54:41.427 STDOUT terraform:       + filename = ".id_rsa.ci.pub"
2025-09-18 09:54:41.427520 | orchestrator | 09:54:41.427 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.427554 | orchestrator | 09:54:41.427 STDOUT terraform:     }
2025-09-18 09:54:41.427915 | orchestrator | 09:54:41.427 STDOUT terraform:   # local_file.inventory will be created
2025-09-18 09:54:41.427930 | orchestrator | 09:54:41.427 STDOUT terraform:   + resource "local_file" "inventory" {
2025-09-18 09:54:41.427960 | orchestrator | 09:54:41.427 STDOUT terraform:       + content = (known after apply)
2025-09-18 09:54:41.427994 | orchestrator | 09:54:41.427 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-18 09:54:41.428029 | orchestrator | 09:54:41.427 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-18 09:54:41.428174 | orchestrator | 09:54:41.428 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-18 09:54:41.428324 | orchestrator | 09:54:41.428 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-18 09:54:41.428357 | orchestrator | 09:54:41.428 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-18 09:54:41.428393 | orchestrator | 09:54:41.428 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-18 09:54:41.428498 | orchestrator | 09:54:41.428 STDOUT terraform:       + directory_permission = "0777"
2025-09-18 09:54:41.428555 | orchestrator | 09:54:41.428 STDOUT terraform:       + file_permission = "0644"
2025-09-18 09:54:41.428601 | orchestrator | 09:54:41.428 STDOUT terraform:       + filename = "inventory.ci"
2025-09-18 09:54:41.428721 | orchestrator | 09:54:41.428 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.428807 | orchestrator | 09:54:41.428 STDOUT terraform:     }
2025-09-18 09:54:41.430340 | orchestrator | 09:54:41.430 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-09-18 09:54:41.430371 | orchestrator | 09:54:41.430 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-09-18 09:54:41.430378 | orchestrator | 09:54:41.430 STDOUT terraform:       + content = (sensitive value)
2025-09-18 09:54:41.430407 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_base64sha256 = (known after apply)
2025-09-18 09:54:41.430433 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_base64sha512 = (known after apply)
2025-09-18 09:54:41.430490 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_md5 = (known after apply)
2025-09-18 09:54:41.430498 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_sha1 = (known after apply)
2025-09-18 09:54:41.430534 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_sha256 = (known after apply)
2025-09-18 09:54:41.430574 | orchestrator | 09:54:41.430 STDOUT terraform:       + content_sha512 = (known after apply)
2025-09-18 09:54:41.430599 | orchestrator | 09:54:41.430 STDOUT terraform:       + directory_permission = "0700"
2025-09-18 09:54:41.430623 | orchestrator | 09:54:41.430 STDOUT terraform:       + file_permission = "0600"
2025-09-18 09:54:41.430654 | orchestrator | 09:54:41.430 STDOUT terraform:       + filename = ".id_rsa.ci"
2025-09-18 09:54:41.430750 | orchestrator | 09:54:41.430 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.430868 | orchestrator | 09:54:41.430 STDOUT terraform:     }
2025-09-18 09:54:41.430972 | orchestrator | 09:54:41.430 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-09-18 09:54:41.431324 | orchestrator | 09:54:41.430 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-09-18 09:54:41.431489 | orchestrator | 09:54:41.431 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.431556 | orchestrator | 09:54:41.431 STDOUT terraform:     }
2025-09-18 09:54:41.432172 | orchestrator | 09:54:41.431 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-09-18 09:54:41.432588 | orchestrator | 09:54:41.432 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-09-18 09:54:41.432777 | orchestrator | 09:54:41.432 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.433019 | orchestrator | 09:54:41.432 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.433271 | orchestrator | 09:54:41.432 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.433467 | orchestrator | 09:54:41.433 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.433645 | orchestrator | 09:54:41.433 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.433924 | orchestrator | 09:54:41.433 STDOUT terraform:       + name = "testbed-volume-manager-base"
2025-09-18 09:54:41.434222 | orchestrator | 09:54:41.433 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.434232 | orchestrator | 09:54:41.434 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.434278 | orchestrator | 09:54:41.434 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.434479 | orchestrator | 09:54:41.434 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.434551 | orchestrator | 09:54:41.434 STDOUT terraform:     }
2025-09-18 09:54:41.434761 | orchestrator | 09:54:41.434 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-09-18 09:54:41.434813 | orchestrator | 09:54:41.434 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.435067 | orchestrator | 09:54:41.434 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.435271 | orchestrator | 09:54:41.435 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.435481 | orchestrator | 09:54:41.435 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.435788 | orchestrator | 09:54:41.435 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.436080 | orchestrator | 09:54:41.435 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.436540 | orchestrator | 09:54:41.436 STDOUT terraform:       + name = "testbed-volume-0-node-base"
2025-09-18 09:54:41.436720 | orchestrator | 09:54:41.436 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.436745 | orchestrator | 09:54:41.436 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.436758 | orchestrator | 09:54:41.436 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.436774 | orchestrator | 09:54:41.436 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.436786 | orchestrator | 09:54:41.436 STDOUT terraform:     }
2025-09-18 09:54:41.436830 | orchestrator | 09:54:41.436 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-09-18 09:54:41.436882 | orchestrator | 09:54:41.436 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.436924 | orchestrator | 09:54:41.436 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.436955 | orchestrator | 09:54:41.436 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.436970 | orchestrator | 09:54:41.436 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.436985 | orchestrator | 09:54:41.436 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.437032 | orchestrator | 09:54:41.436 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.437082 | orchestrator | 09:54:41.437 STDOUT terraform:       + name = "testbed-volume-1-node-base"
2025-09-18 09:54:41.437230 | orchestrator | 09:54:41.437 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.437247 | orchestrator | 09:54:41.437 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.437263 | orchestrator | 09:54:41.437 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.437275 | orchestrator | 09:54:41.437 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.437289 | orchestrator | 09:54:41.437 STDOUT terraform:     }
2025-09-18 09:54:41.437318 | orchestrator | 09:54:41.437 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-09-18 09:54:41.437382 | orchestrator | 09:54:41.437 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.437517 | orchestrator | 09:54:41.437 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.437651 | orchestrator | 09:54:41.437 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.437670 | orchestrator | 09:54:41.437 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.437711 | orchestrator | 09:54:41.437 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.437744 | orchestrator | 09:54:41.437 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.437797 | orchestrator | 09:54:41.437 STDOUT terraform:       + name = "testbed-volume-2-node-base"
2025-09-18 09:54:41.437816 | orchestrator | 09:54:41.437 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.437847 | orchestrator | 09:54:41.437 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.437862 | orchestrator | 09:54:41.437 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.437877 | orchestrator | 09:54:41.437 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.437891 | orchestrator | 09:54:41.437 STDOUT terraform:     }
2025-09-18 09:54:41.437971 | orchestrator | 09:54:41.437 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-09-18 09:54:41.438237 | orchestrator | 09:54:41.437 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.438275 | orchestrator | 09:54:41.438 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.438291 | orchestrator | 09:54:41.438 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.438335 | orchestrator | 09:54:41.438 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.438527 | orchestrator | 09:54:41.438 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.438663 | orchestrator | 09:54:41.438 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.438692 | orchestrator | 09:54:41.438 STDOUT terraform:       + name = "testbed-volume-3-node-base"
2025-09-18 09:54:41.438734 | orchestrator | 09:54:41.438 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.438752 | orchestrator | 09:54:41.438 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.438785 | orchestrator | 09:54:41.438 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.438801 | orchestrator | 09:54:41.438 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.438815 | orchestrator | 09:54:41.438 STDOUT terraform:     }
2025-09-18 09:54:41.438875 | orchestrator | 09:54:41.438 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-09-18 09:54:41.438938 | orchestrator | 09:54:41.438 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.439120 | orchestrator | 09:54:41.438 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.439139 | orchestrator | 09:54:41.439 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.439203 | orchestrator | 09:54:41.439 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.439331 | orchestrator | 09:54:41.439 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.439349 | orchestrator | 09:54:41.439 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.439435 | orchestrator | 09:54:41.439 STDOUT terraform:       + name = "testbed-volume-4-node-base"
2025-09-18 09:54:41.439471 | orchestrator | 09:54:41.439 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.439486 | orchestrator | 09:54:41.439 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.439501 | orchestrator | 09:54:41.439 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.439529 | orchestrator | 09:54:41.439 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.439545 | orchestrator | 09:54:41.439 STDOUT terraform:     }
2025-09-18 09:54:41.439595 | orchestrator | 09:54:41.439 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-09-18 09:54:41.439640 | orchestrator | 09:54:41.439 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-09-18 09:54:41.439674 | orchestrator | 09:54:41.439 STDOUT terraform:       + attachment = (known after apply)
2025-09-18 09:54:41.439689 | orchestrator | 09:54:41.439 STDOUT terraform:       + availability_zone = "nova"
2025-09-18 09:54:41.439733 | orchestrator | 09:54:41.439 STDOUT terraform:       + id = (known after apply)
2025-09-18 09:54:41.439784 | orchestrator | 09:54:41.439 STDOUT terraform:       + image_id = (known after apply)
2025-09-18 09:54:41.439817 | orchestrator | 09:54:41.439 STDOUT terraform:       + metadata = (known after apply)
2025-09-18 09:54:41.439876 | orchestrator | 09:54:41.439 STDOUT terraform:       + name = "testbed-volume-5-node-base"
2025-09-18 09:54:41.440005 | orchestrator | 09:54:41.439 STDOUT terraform:       + region = (known after apply)
2025-09-18 09:54:41.440025 | orchestrator | 09:54:41.439 STDOUT terraform:       + size = 80
2025-09-18 09:54:41.440049 | orchestrator | 09:54:41.440 STDOUT terraform:       + volume_retype_policy = "never"
2025-09-18 09:54:41.440064 | orchestrator | 09:54:41.440 STDOUT terraform:       + volume_type = "ssd"
2025-09-18 09:54:41.440075 | orchestrator | 09:54:41.440 STDOUT terraform:     }
2025-09-18 09:54:41.440119 | orchestrator | 09:54:41.440 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-09-18 09:54:41.440169 | orchestrator | 09:54:41.440 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-09-18 09:54:41.440210 | orchestrator | 09:54:41.440 STDOUT
terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.440226 | orchestrator | 09:54:41.440 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.440271 | orchestrator | 09:54:41.440 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.440308 | orchestrator | 09:54:41.440 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.440360 | orchestrator | 09:54:41.440 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-09-18 09:54:41.440376 | orchestrator | 09:54:41.440 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.440405 | orchestrator | 09:54:41.440 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.440421 | orchestrator | 09:54:41.440 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.440435 | orchestrator | 09:54:41.440 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.440449 | orchestrator | 09:54:41.440 STDOUT terraform:  } 2025-09-18 09:54:41.446139 | orchestrator | 09:54:41.440 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-09-18 09:54:41.446232 | orchestrator | 09:54:41.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.446243 | orchestrator | 09:54:41.446 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.446253 | orchestrator | 09:54:41.446 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.446262 | orchestrator | 09:54:41.446 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.446274 | orchestrator | 09:54:41.446 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.446307 | orchestrator | 09:54:41.446 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-09-18 09:54:41.446339 | orchestrator | 09:54:41.446 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.446353 | orchestrator | 09:54:41.446 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.446379 | 
orchestrator | 09:54:41.446 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.446410 | orchestrator | 09:54:41.446 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.446422 | orchestrator | 09:54:41.446 STDOUT terraform:  } 2025-09-18 09:54:41.446463 | orchestrator | 09:54:41.446 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-09-18 09:54:41.446506 | orchestrator | 09:54:41.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.446541 | orchestrator | 09:54:41.446 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.446567 | orchestrator | 09:54:41.446 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.446594 | orchestrator | 09:54:41.446 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450113 | orchestrator | 09:54:41.446 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450219 | orchestrator | 09:54:41.446 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-09-18 09:54:41.450232 | orchestrator | 09:54:41.446 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450241 | orchestrator | 09:54:41.446 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450261 | orchestrator | 09:54:41.446 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450271 | orchestrator | 09:54:41.446 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450279 | orchestrator | 09:54:41.446 STDOUT terraform:  } 2025-09-18 09:54:41.450287 | orchestrator | 09:54:41.446 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-09-18 09:54:41.450296 | orchestrator | 09:54:41.446 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450304 | orchestrator | 09:54:41.446 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450312 | orchestrator | 
09:54:41.446 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450320 | orchestrator | 09:54:41.446 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450327 | orchestrator | 09:54:41.446 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450335 | orchestrator | 09:54:41.446 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-09-18 09:54:41.450343 | orchestrator | 09:54:41.446 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450350 | orchestrator | 09:54:41.446 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450358 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450365 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450373 | orchestrator | 09:54:41.447 STDOUT terraform:  } 2025-09-18 09:54:41.450381 | orchestrator | 09:54:41.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-09-18 09:54:41.450389 | orchestrator | 09:54:41.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450397 | orchestrator | 09:54:41.447 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450404 | orchestrator | 09:54:41.447 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450412 | orchestrator | 09:54:41.447 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450420 | orchestrator | 09:54:41.447 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450427 | orchestrator | 09:54:41.447 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-09-18 09:54:41.450435 | orchestrator | 09:54:41.447 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450455 | orchestrator | 09:54:41.447 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450464 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 
09:54:41.450471 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450479 | orchestrator | 09:54:41.447 STDOUT terraform:  } 2025-09-18 09:54:41.450487 | orchestrator | 09:54:41.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-09-18 09:54:41.450495 | orchestrator | 09:54:41.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450502 | orchestrator | 09:54:41.447 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450510 | orchestrator | 09:54:41.447 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450518 | orchestrator | 09:54:41.447 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450525 | orchestrator | 09:54:41.447 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450546 | orchestrator | 09:54:41.447 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-09-18 09:54:41.450555 | orchestrator | 09:54:41.447 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450566 | orchestrator | 09:54:41.447 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450575 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450582 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450590 | orchestrator | 09:54:41.447 STDOUT terraform:  } 2025-09-18 09:54:41.450598 | orchestrator | 09:54:41.447 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-09-18 09:54:41.450606 | orchestrator | 09:54:41.447 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450613 | orchestrator | 09:54:41.447 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450621 | orchestrator | 09:54:41.447 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450629 | 
orchestrator | 09:54:41.447 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450637 | orchestrator | 09:54:41.447 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450645 | orchestrator | 09:54:41.447 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-09-18 09:54:41.450653 | orchestrator | 09:54:41.447 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450660 | orchestrator | 09:54:41.447 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450668 | orchestrator | 09:54:41.447 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450676 | orchestrator | 09:54:41.448 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450684 | orchestrator | 09:54:41.448 STDOUT terraform:  } 2025-09-18 09:54:41.450691 | orchestrator | 09:54:41.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-09-18 09:54:41.450699 | orchestrator | 09:54:41.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450713 | orchestrator | 09:54:41.448 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450721 | orchestrator | 09:54:41.448 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450728 | orchestrator | 09:54:41.448 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450736 | orchestrator | 09:54:41.448 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450744 | orchestrator | 09:54:41.448 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-09-18 09:54:41.450751 | orchestrator | 09:54:41.448 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450759 | orchestrator | 09:54:41.448 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450767 | orchestrator | 09:54:41.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450775 | orchestrator | 09:54:41.448 STDOUT terraform:  + volume_type = "ssd" 
2025-09-18 09:54:41.450782 | orchestrator | 09:54:41.448 STDOUT terraform:  } 2025-09-18 09:54:41.450790 | orchestrator | 09:54:41.448 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-09-18 09:54:41.450798 | orchestrator | 09:54:41.448 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-09-18 09:54:41.450806 | orchestrator | 09:54:41.448 STDOUT terraform:  + attachment = (known after apply) 2025-09-18 09:54:41.450813 | orchestrator | 09:54:41.448 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450821 | orchestrator | 09:54:41.448 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.450829 | orchestrator | 09:54:41.448 STDOUT terraform:  + metadata = (known after apply) 2025-09-18 09:54:41.450836 | orchestrator | 09:54:41.448 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-09-18 09:54:41.450855 | orchestrator | 09:54:41.448 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.450863 | orchestrator | 09:54:41.448 STDOUT terraform:  + size = 20 2025-09-18 09:54:41.450871 | orchestrator | 09:54:41.448 STDOUT terraform:  + volume_retype_policy = "never" 2025-09-18 09:54:41.450879 | orchestrator | 09:54:41.448 STDOUT terraform:  + volume_type = "ssd" 2025-09-18 09:54:41.450887 | orchestrator | 09:54:41.448 STDOUT terraform:  } 2025-09-18 09:54:41.450898 | orchestrator | 09:54:41.448 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-09-18 09:54:41.450906 | orchestrator | 09:54:41.448 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-09-18 09:54:41.450914 | orchestrator | 09:54:41.448 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 09:54:41.450922 | orchestrator | 09:54:41.448 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 09:54:41.450930 | orchestrator | 09:54:41.448 STDOUT terraform:  + all_metadata = (known after apply) 
2025-09-18 09:54:41.450937 | orchestrator | 09:54:41.448 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.450945 | orchestrator | 09:54:41.448 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.450958 | orchestrator | 09:54:41.448 STDOUT terraform:  + config_drive = true 2025-09-18 09:54:41.450965 | orchestrator | 09:54:41.448 STDOUT terraform:  + created = (known after apply) 2025-09-18 09:54:41.450973 | orchestrator | 09:54:41.448 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-18 09:54:41.450981 | orchestrator | 09:54:41.449 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-09-18 09:54:41.450989 | orchestrator | 09:54:41.449 STDOUT terraform:  + force_delete = false 2025-09-18 09:54:41.450997 | orchestrator | 09:54:41.449 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-18 09:54:41.451004 | orchestrator | 09:54:41.449 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.451012 | orchestrator | 09:54:41.449 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 09:54:41.451020 | orchestrator | 09:54:41.449 STDOUT terraform:  + image_name = (known after apply) 2025-09-18 09:54:41.451027 | orchestrator | 09:54:41.449 STDOUT terraform:  + key_pair = "testbed" 2025-09-18 09:54:41.451035 | orchestrator | 09:54:41.449 STDOUT terraform:  + name = "testbed-manager" 2025-09-18 09:54:41.451043 | orchestrator | 09:54:41.449 STDOUT terraform:  + power_state = "active" 2025-09-18 09:54:41.451051 | orchestrator | 09:54:41.449 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.451058 | orchestrator | 09:54:41.449 STDOUT terraform:  + security_groups = (known after apply) 2025-09-18 09:54:41.451066 | orchestrator | 09:54:41.449 STDOUT terraform:  + stop_before_destroy = false 2025-09-18 09:54:41.451074 | orchestrator | 09:54:41.449 STDOUT terraform:  + updated = (known after apply) 2025-09-18 09:54:41.451081 | orchestrator | 09:54:41.449 STDOUT terraform:  + 
user_data = (sensitive value) 2025-09-18 09:54:41.451089 | orchestrator | 09:54:41.449 STDOUT terraform:  + block_device { 2025-09-18 09:54:41.451097 | orchestrator | 09:54:41.449 STDOUT terraform:  + boot_index = 0 2025-09-18 09:54:41.451105 | orchestrator | 09:54:41.449 STDOUT terraform:  + delete_on_termination = false 2025-09-18 09:54:41.451112 | orchestrator | 09:54:41.449 STDOUT terraform:  + destination_type = "volume" 2025-09-18 09:54:41.451120 | orchestrator | 09:54:41.449 STDOUT terraform:  + multiattach = false 2025-09-18 09:54:41.451128 | orchestrator | 09:54:41.449 STDOUT terraform:  + source_type = "volume" 2025-09-18 09:54:41.451136 | orchestrator | 09:54:41.449 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.451144 | orchestrator | 09:54:41.449 STDOUT terraform:  } 2025-09-18 09:54:41.451151 | orchestrator | 09:54:41.449 STDOUT terraform:  + network { 2025-09-18 09:54:41.451159 | orchestrator | 09:54:41.449 STDOUT terraform:  + access_network = false 2025-09-18 09:54:41.451171 | orchestrator | 09:54:41.449 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-18 09:54:41.451195 | orchestrator | 09:54:41.449 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-18 09:54:41.451203 | orchestrator | 09:54:41.449 STDOUT terraform:  + mac = (known after apply) 2025-09-18 09:54:41.451215 | orchestrator | 09:54:41.449 STDOUT terraform:  + name = (known after apply) 2025-09-18 09:54:41.451222 | orchestrator | 09:54:41.449 STDOUT terraform:  + port = (known after apply) 2025-09-18 09:54:41.451230 | orchestrator | 09:54:41.449 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.451238 | orchestrator | 09:54:41.449 STDOUT terraform:  } 2025-09-18 09:54:41.451246 | orchestrator | 09:54:41.449 STDOUT terraform:  } 2025-09-18 09:54:41.451254 | orchestrator | 09:54:41.449 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-09-18 09:54:41.451262 | orchestrator | 09:54:41.449 
STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-18 09:54:41.451270 | orchestrator | 09:54:41.449 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 09:54:41.451281 | orchestrator | 09:54:41.449 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 09:54:41.451289 | orchestrator | 09:54:41.449 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-18 09:54:41.451297 | orchestrator | 09:54:41.449 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.451305 | orchestrator | 09:54:41.449 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.451313 | orchestrator | 09:54:41.450 STDOUT terraform:  + config_drive = true 2025-09-18 09:54:41.451354 | orchestrator | 09:54:41.450 STDOUT terraform:  + created = (known after apply) 2025-09-18 09:54:41.451367 | orchestrator | 09:54:41.451 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-18 09:54:41.451408 | orchestrator | 09:54:41.451 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-18 09:54:41.451421 | orchestrator | 09:54:41.451 STDOUT terraform:  + force_delete = false 2025-09-18 09:54:41.451461 | orchestrator | 09:54:41.451 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-18 09:54:41.451496 | orchestrator | 09:54:41.451 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.451531 | orchestrator | 09:54:41.451 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 09:54:41.451564 | orchestrator | 09:54:41.451 STDOUT terraform:  + image_name = (known after apply) 2025-09-18 09:54:41.451577 | orchestrator | 09:54:41.451 STDOUT terraform:  + key_pair = "testbed" 2025-09-18 09:54:41.451617 | orchestrator | 09:54:41.451 STDOUT terraform:  + name = "testbed-node-0" 2025-09-18 09:54:41.451630 | orchestrator | 09:54:41.451 STDOUT terraform:  + power_state = "active" 2025-09-18 09:54:41.451670 | orchestrator | 09:54:41.451 STDOUT terraform:  + region = (known after 
apply) 2025-09-18 09:54:41.451738 | orchestrator | 09:54:41.451 STDOUT terraform:  + security_groups = (known after apply) 2025-09-18 09:54:41.451751 | orchestrator | 09:54:41.451 STDOUT terraform:  + stop_before_destroy = false 2025-09-18 09:54:41.451760 | orchestrator | 09:54:41.451 STDOUT terraform:  + updated = (known after apply) 2025-09-18 09:54:41.451806 | orchestrator | 09:54:41.451 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-18 09:54:41.451819 | orchestrator | 09:54:41.451 STDOUT terraform:  + block_device { 2025-09-18 09:54:41.451837 | orchestrator | 09:54:41.451 STDOUT terraform:  + boot_index = 0 2025-09-18 09:54:41.451867 | orchestrator | 09:54:41.451 STDOUT terraform:  + delete_on_termination = false 2025-09-18 09:54:41.451888 | orchestrator | 09:54:41.451 STDOUT terraform:  + destination_type = "volume" 2025-09-18 09:54:41.451918 | orchestrator | 09:54:41.451 STDOUT terraform:  + multiattach = false 2025-09-18 09:54:41.451947 | orchestrator | 09:54:41.451 STDOUT terraform:  + source_type = "volume" 2025-09-18 09:54:41.451983 | orchestrator | 09:54:41.451 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.451996 | orchestrator | 09:54:41.451 STDOUT terraform:  } 2025-09-18 09:54:41.452004 | orchestrator | 09:54:41.451 STDOUT terraform:  + network { 2025-09-18 09:54:41.452014 | orchestrator | 09:54:41.451 STDOUT terraform:  + access_network = false 2025-09-18 09:54:41.452050 | orchestrator | 09:54:41.452 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-09-18 09:54:41.452080 | orchestrator | 09:54:41.452 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-18 09:54:41.452110 | orchestrator | 09:54:41.452 STDOUT terraform:  + mac = (known after apply) 2025-09-18 09:54:41.452141 | orchestrator | 09:54:41.452 STDOUT terraform:  + name = (known after apply) 2025-09-18 09:54:41.452172 | orchestrator | 09:54:41.452 STDOUT terraform:  + port = (known after apply) 2025-09-18 
09:54:41.452201 | orchestrator | 09:54:41.452 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.452212 | orchestrator | 09:54:41.452 STDOUT terraform:  } 2025-09-18 09:54:41.452222 | orchestrator | 09:54:41.452 STDOUT terraform:  } 2025-09-18 09:54:41.452271 | orchestrator | 09:54:41.452 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-09-18 09:54:41.452312 | orchestrator | 09:54:41.452 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-18 09:54:41.452346 | orchestrator | 09:54:41.452 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 09:54:41.452379 | orchestrator | 09:54:41.452 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 09:54:41.452413 | orchestrator | 09:54:41.452 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-18 09:54:41.452454 | orchestrator | 09:54:41.452 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.452462 | orchestrator | 09:54:41.452 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.452472 | orchestrator | 09:54:41.452 STDOUT terraform:  + config_drive = true 2025-09-18 09:54:41.452509 | orchestrator | 09:54:41.452 STDOUT terraform:  + created = (known after apply) 2025-09-18 09:54:41.452543 | orchestrator | 09:54:41.452 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-18 09:54:41.452571 | orchestrator | 09:54:41.452 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-18 09:54:41.452583 | orchestrator | 09:54:41.452 STDOUT terraform:  + force_delete = false 2025-09-18 09:54:41.452626 | orchestrator | 09:54:41.452 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-09-18 09:54:41.452663 | orchestrator | 09:54:41.452 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.452691 | orchestrator | 09:54:41.452 STDOUT terraform:  + image_id = (known after apply) 2025-09-18 09:54:41.452725 | orchestrator | 09:54:41.452 STDOUT 
terraform:  + image_name = (known after apply) 2025-09-18 09:54:41.452737 | orchestrator | 09:54:41.452 STDOUT terraform:  + key_pair = "testbed" 2025-09-18 09:54:41.452774 | orchestrator | 09:54:41.452 STDOUT terraform:  + name = "testbed-node-1" 2025-09-18 09:54:41.452787 | orchestrator | 09:54:41.452 STDOUT terraform:  + power_state = "active" 2025-09-18 09:54:41.452829 | orchestrator | 09:54:41.452 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.452863 | orchestrator | 09:54:41.452 STDOUT terraform:  + security_groups = (known after apply) 2025-09-18 09:54:41.452875 | orchestrator | 09:54:41.452 STDOUT terraform:  + stop_before_destroy = false 2025-09-18 09:54:41.452913 | orchestrator | 09:54:41.452 STDOUT terraform:  + updated = (known after apply) 2025-09-18 09:54:41.452965 | orchestrator | 09:54:41.452 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-09-18 09:54:41.452977 | orchestrator | 09:54:41.452 STDOUT terraform:  + block_device { 2025-09-18 09:54:41.453005 | orchestrator | 09:54:41.452 STDOUT terraform:  + boot_index = 0 2025-09-18 09:54:41.453038 | orchestrator | 09:54:41.452 STDOUT terraform:  + delete_on_termination = false 2025-09-18 09:54:41.453050 | orchestrator | 09:54:41.453 STDOUT terraform:  + destination_type = "volume" 2025-09-18 09:54:41.453082 | orchestrator | 09:54:41.453 STDOUT terraform:  + multiattach = false 2025-09-18 09:54:41.453109 | orchestrator | 09:54:41.453 STDOUT terraform:  + source_type = "volume" 2025-09-18 09:54:41.453148 | orchestrator | 09:54:41.453 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.453160 | orchestrator | 09:54:41.453 STDOUT terraform:  } 2025-09-18 09:54:41.453168 | orchestrator | 09:54:41.453 STDOUT terraform:  + network { 2025-09-18 09:54:41.453201 | orchestrator | 09:54:41.453 STDOUT terraform:  + access_network = false 2025-09-18 09:54:41.453305 | orchestrator | 09:54:41.453 STDOUT terraform:  + fixed_ip_v4 = (known after 
apply) 2025-09-18 09:54:41.453349 | orchestrator | 09:54:41.453 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-09-18 09:54:41.453364 | orchestrator | 09:54:41.453 STDOUT terraform:  + mac = (known after apply) 2025-09-18 09:54:41.453368 | orchestrator | 09:54:41.453 STDOUT terraform:  + name = (known after apply) 2025-09-18 09:54:41.453372 | orchestrator | 09:54:41.453 STDOUT terraform:  + port = (known after apply) 2025-09-18 09:54:41.453378 | orchestrator | 09:54:41.453 STDOUT terraform:  + uuid = (known after apply) 2025-09-18 09:54:41.453382 | orchestrator | 09:54:41.453 STDOUT terraform:  } 2025-09-18 09:54:41.453388 | orchestrator | 09:54:41.453 STDOUT terraform:  } 2025-09-18 09:54:41.453435 | orchestrator | 09:54:41.453 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-09-18 09:54:41.453474 | orchestrator | 09:54:41.453 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-09-18 09:54:41.453497 | orchestrator | 09:54:41.453 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-09-18 09:54:41.453533 | orchestrator | 09:54:41.453 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-09-18 09:54:41.453567 | orchestrator | 09:54:41.453 STDOUT terraform:  + all_metadata = (known after apply) 2025-09-18 09:54:41.453601 | orchestrator | 09:54:41.453 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.453624 | orchestrator | 09:54:41.453 STDOUT terraform:  + availability_zone = "nova" 2025-09-18 09:54:41.453644 | orchestrator | 09:54:41.453 STDOUT terraform:  + config_drive = true 2025-09-18 09:54:41.453679 | orchestrator | 09:54:41.453 STDOUT terraform:  + created = (known after apply) 2025-09-18 09:54:41.453713 | orchestrator | 09:54:41.453 STDOUT terraform:  + flavor_id = (known after apply) 2025-09-18 09:54:41.453741 | orchestrator | 09:54:41.453 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-09-18 09:54:41.453785 | orchestrator | 09:54:41.453 
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }
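The repeated `node_server` and `node_volume_attachment` blocks in the plan come from count-based resources. As orientation, the Terraform source behind this plan plausibly looks like the sketch below; the actual testbed configuration is not part of this log, and the referenced volume and user-data names (`openstack_blockstorage_volume_v3.node_volume`, `user_data.sh`) are illustrative assumptions, as is `count = 6`:

```hcl
# Illustrative sketch only -- the real testbed Terraform source is not in
# this log. Literal values mirror the plan output above (flavor, key pair,
# availability zone, boot-from-volume block device).
resource "openstack_compute_instance_v2" "node_server" {
  count             = 6                                # node_server[0..5] assumed
  name              = "testbed-node-${count.index}"
  flavor_name       = "OSISM-8V-32"
  key_pair          = "testbed"
  availability_zone = "nova"
  config_drive      = true
  power_state       = "active"
  user_data         = file("user_data.sh")             # shown only as a hash in the plan

  # Boot from a pre-created volume (source_type/destination_type = "volume");
  # the volume resource name below is an assumption.
  block_device {
    uuid                  = openstack_blockstorage_volume_v3.node_volume[count.index].id
    source_type           = "volume"
    destination_type      = "volume"
    boot_index            = 0
    delete_on_termination = false
  }

  network {
    port = openstack_networking_port_v2.node_port_management[count.index].id
  }
}
```

The nine `node_volume_attachment[0..8]` resources would be a similar count-based `openstack_compute_volume_attach_v2` block; the node-to-volume mapping is not visible in the log, so it is not guessed at here.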
  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.11"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[2] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 09:54:41.487115 | orchestrator | 09:54:41.487 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-18 09:54:41.487148 | orchestrator | 09:54:41.487 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.487196 | orchestrator | 09:54:41.487 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 09:54:41.487228 | orchestrator | 09:54:41.487 STDOUT terraform:  + device_owner = (known after apply) 2025-09-18 09:54:41.487262 | orchestrator | 09:54:41.487 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 09:54:41.487297 | orchestrator | 09:54:41.487 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 09:54:41.487339 | orchestrator | 09:54:41.487 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.487374 | orchestrator | 09:54:41.487 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 09:54:41.487409 | orchestrator | 09:54:41.487 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 09:54:41.487444 | orchestrator | 09:54:41.487 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 09:54:41.487479 | orchestrator | 09:54:41.487 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 09:54:41.487516 | orchestrator | 09:54:41.487 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.487550 | orchestrator | 09:54:41.487 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 09:54:41.487585 | orchestrator | 09:54:41.487 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.487604 | orchestrator | 09:54:41.487 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.487632 | orchestrator | 09:54:41.487 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 09:54:41.487645 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487664 | orchestrator | 09:54:41.487 STDOUT terraform:  
+ allowed_address_pairs { 2025-09-18 09:54:41.487692 | orchestrator | 09:54:41.487 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-18 09:54:41.487706 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487725 | orchestrator | 09:54:41.487 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.487754 | orchestrator | 09:54:41.487 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 09:54:41.487765 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487787 | orchestrator | 09:54:41.487 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.487814 | orchestrator | 09:54:41.487 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-18 09:54:41.487820 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487845 | orchestrator | 09:54:41.487 STDOUT terraform:  + binding (known after apply) 2025-09-18 09:54:41.487852 | orchestrator | 09:54:41.487 STDOUT terraform:  + fixed_ip { 2025-09-18 09:54:41.487879 | orchestrator | 09:54:41.487 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-09-18 09:54:41.487913 | orchestrator | 09:54:41.487 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 09:54:41.487919 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487925 | orchestrator | 09:54:41.487 STDOUT terraform:  } 2025-09-18 09:54:41.487970 | orchestrator | 09:54:41.487 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-09-18 09:54:41.488013 | orchestrator | 09:54:41.487 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-18 09:54:41.488049 | orchestrator | 09:54:41.488 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 09:54:41.488084 | orchestrator | 09:54:41.488 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 09:54:41.488119 | orchestrator | 09:54:41.488 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-09-18 09:54:41.488154 | orchestrator | 09:54:41.488 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.488198 | orchestrator | 09:54:41.488 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 09:54:41.488233 | orchestrator | 09:54:41.488 STDOUT terraform:  + device_owner = (known after apply) 2025-09-18 09:54:41.488270 | orchestrator | 09:54:41.488 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 09:54:41.488302 | orchestrator | 09:54:41.488 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 09:54:41.488339 | orchestrator | 09:54:41.488 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.488373 | orchestrator | 09:54:41.488 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 09:54:41.488408 | orchestrator | 09:54:41.488 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 09:54:41.488441 | orchestrator | 09:54:41.488 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 09:54:41.488475 | orchestrator | 09:54:41.488 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 09:54:41.488509 | orchestrator | 09:54:41.488 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.488544 | orchestrator | 09:54:41.488 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 09:54:41.488578 | orchestrator | 09:54:41.488 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.488598 | orchestrator | 09:54:41.488 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.488626 | orchestrator | 09:54:41.488 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 09:54:41.488636 | orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488655 | orchestrator | 09:54:41.488 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.488684 | orchestrator | 09:54:41.488 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-18 09:54:41.488698 | 
orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488718 | orchestrator | 09:54:41.488 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.488745 | orchestrator | 09:54:41.488 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 09:54:41.488759 | orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488778 | orchestrator | 09:54:41.488 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.488805 | orchestrator | 09:54:41.488 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-18 09:54:41.488812 | orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488839 | orchestrator | 09:54:41.488 STDOUT terraform:  + binding (known after apply) 2025-09-18 09:54:41.488852 | orchestrator | 09:54:41.488 STDOUT terraform:  + fixed_ip { 2025-09-18 09:54:41.488877 | orchestrator | 09:54:41.488 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-09-18 09:54:41.488905 | orchestrator | 09:54:41.488 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 09:54:41.488919 | orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488937 | orchestrator | 09:54:41.488 STDOUT terraform:  } 2025-09-18 09:54:41.488982 | orchestrator | 09:54:41.488 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-09-18 09:54:41.489907 | orchestrator | 09:54:41.488 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-18 09:54:41.490045 | orchestrator | 09:54:41.489 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 09:54:41.490132 | orchestrator | 09:54:41.490 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 09:54:41.490246 | orchestrator | 09:54:41.490 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-18 09:54:41.490326 | orchestrator | 09:54:41.490 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.490415 | orchestrator | 
09:54:41.490 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 09:54:41.490489 | orchestrator | 09:54:41.490 STDOUT terraform:  + device_owner = (known after apply) 2025-09-18 09:54:41.490571 | orchestrator | 09:54:41.490 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 09:54:41.490650 | orchestrator | 09:54:41.490 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 09:54:41.490730 | orchestrator | 09:54:41.490 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.490809 | orchestrator | 09:54:41.490 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 09:54:41.490887 | orchestrator | 09:54:41.490 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 09:54:41.490991 | orchestrator | 09:54:41.490 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 09:54:41.491066 | orchestrator | 09:54:41.490 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 09:54:41.491158 | orchestrator | 09:54:41.491 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.491252 | orchestrator | 09:54:41.491 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 09:54:41.491332 | orchestrator | 09:54:41.491 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.491376 | orchestrator | 09:54:41.491 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.491437 | orchestrator | 09:54:41.491 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 09:54:41.491466 | orchestrator | 09:54:41.491 STDOUT terraform:  } 2025-09-18 09:54:41.491510 | orchestrator | 09:54:41.491 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.491573 | orchestrator | 09:54:41.491 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-18 09:54:41.491604 | orchestrator | 09:54:41.491 STDOUT terraform:  } 2025-09-18 09:54:41.491646 | orchestrator | 09:54:41.491 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 
09:54:41.491705 | orchestrator | 09:54:41.491 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 09:54:41.491733 | orchestrator | 09:54:41.491 STDOUT terraform:  } 2025-09-18 09:54:41.491775 | orchestrator | 09:54:41.491 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.491837 | orchestrator | 09:54:41.491 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-18 09:54:41.491865 | orchestrator | 09:54:41.491 STDOUT terraform:  } 2025-09-18 09:54:41.491914 | orchestrator | 09:54:41.491 STDOUT terraform:  + binding (known after apply) 2025-09-18 09:54:41.491945 | orchestrator | 09:54:41.491 STDOUT terraform:  + fixed_ip { 2025-09-18 09:54:41.491998 | orchestrator | 09:54:41.491 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-09-18 09:54:41.492062 | orchestrator | 09:54:41.491 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 09:54:41.492092 | orchestrator | 09:54:41.492 STDOUT terraform:  } 2025-09-18 09:54:41.492120 | orchestrator | 09:54:41.492 STDOUT terraform:  } 2025-09-18 09:54:41.492236 | orchestrator | 09:54:41.492 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-09-18 09:54:41.492332 | orchestrator | 09:54:41.492 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-09-18 09:54:41.492410 | orchestrator | 09:54:41.492 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 09:54:41.492487 | orchestrator | 09:54:41.492 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-09-18 09:54:41.492561 | orchestrator | 09:54:41.492 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-09-18 09:54:41.492639 | orchestrator | 09:54:41.492 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.492716 | orchestrator | 09:54:41.492 STDOUT terraform:  + device_id = (known after apply) 2025-09-18 09:54:41.492791 | orchestrator | 09:54:41.492 STDOUT terraform:  + device_owner = (known after 
apply) 2025-09-18 09:54:41.492875 | orchestrator | 09:54:41.492 STDOUT terraform:  + dns_assignment = (known after apply) 2025-09-18 09:54:41.492952 | orchestrator | 09:54:41.492 STDOUT terraform:  + dns_name = (known after apply) 2025-09-18 09:54:41.493030 | orchestrator | 09:54:41.492 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.493106 | orchestrator | 09:54:41.493 STDOUT terraform:  + mac_address = (known after apply) 2025-09-18 09:54:41.493198 | orchestrator | 09:54:41.493 STDOUT terraform:  + network_id = (known after apply) 2025-09-18 09:54:41.493275 | orchestrator | 09:54:41.493 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-09-18 09:54:41.493353 | orchestrator | 09:54:41.493 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-09-18 09:54:41.493432 | orchestrator | 09:54:41.493 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.493508 | orchestrator | 09:54:41.493 STDOUT terraform:  + security_group_ids = (known after apply) 2025-09-18 09:54:41.493584 | orchestrator | 09:54:41.493 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.493626 | orchestrator | 09:54:41.493 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.493687 | orchestrator | 09:54:41.493 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-09-18 09:54:41.493716 | orchestrator | 09:54:41.493 STDOUT terraform:  } 2025-09-18 09:54:41.493757 | orchestrator | 09:54:41.493 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.493818 | orchestrator | 09:54:41.493 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-09-18 09:54:41.493848 | orchestrator | 09:54:41.493 STDOUT terraform:  } 2025-09-18 09:54:41.493889 | orchestrator | 09:54:41.493 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.493950 | orchestrator | 09:54:41.493 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-09-18 09:54:41.493979 | orchestrator | 09:54:41.493 STDOUT terraform:  } 
2025-09-18 09:54:41.494042 | orchestrator | 09:54:41.493 STDOUT terraform:  + allowed_address_pairs { 2025-09-18 09:54:41.494114 | orchestrator | 09:54:41.494 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-09-18 09:54:41.501505 | orchestrator | 09:54:41.501 STDOUT terraform:  } 2025-09-18 09:54:41.502911 | orchestrator | 09:54:41.501 STDOUT terraform:  + binding (known after apply) 2025-09-18 09:54:41.504123 | orchestrator | 09:54:41.503 STDOUT terraform:  + fixed_ip { 2025-09-18 09:54:41.505450 | orchestrator | 09:54:41.504 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-09-18 09:54:41.506124 | orchestrator | 09:54:41.505 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 09:54:41.506156 | orchestrator | 09:54:41.506 STDOUT terraform:  } 2025-09-18 09:54:41.506195 | orchestrator | 09:54:41.506 STDOUT terraform:  } 2025-09-18 09:54:41.506252 | orchestrator | 09:54:41.506 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-09-18 09:54:41.506308 | orchestrator | 09:54:41.506 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-09-18 09:54:41.506336 | orchestrator | 09:54:41.506 STDOUT terraform:  + force_destroy = false 2025-09-18 09:54:41.506374 | orchestrator | 09:54:41.506 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.507028 | orchestrator | 09:54:41.506 STDOUT terraform:  + port_id = (known after apply) 2025-09-18 09:54:41.507767 | orchestrator | 09:54:41.507 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.508606 | orchestrator | 09:54:41.507 STDOUT terraform:  + router_id = (known after apply) 2025-09-18 09:54:41.509801 | orchestrator | 09:54:41.509 STDOUT terraform:  + subnet_id = (known after apply) 2025-09-18 09:54:41.510817 | orchestrator | 09:54:41.510 STDOUT terraform:  } 2025-09-18 09:54:41.511526 | orchestrator | 09:54:41.511 STDOUT terraform:  # openstack_networking_router_v2.router will be 
created 2025-09-18 09:54:41.511929 | orchestrator | 09:54:41.511 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-09-18 09:54:41.512249 | orchestrator | 09:54:41.512 STDOUT terraform:  + admin_state_up = (known after apply) 2025-09-18 09:54:41.512323 | orchestrator | 09:54:41.512 STDOUT terraform:  + all_tags = (known after apply) 2025-09-18 09:54:41.512706 | orchestrator | 09:54:41.512 STDOUT terraform:  + availability_zone_hints = [ 2025-09-18 09:54:41.512734 | orchestrator | 09:54:41.512 STDOUT terraform:  + "nova", 2025-09-18 09:54:41.512784 | orchestrator | 09:54:41.512 STDOUT terraform:  ] 2025-09-18 09:54:41.513073 | orchestrator | 09:54:41.512 STDOUT terraform:  + distributed = (known after apply) 2025-09-18 09:54:41.513360 | orchestrator | 09:54:41.513 STDOUT terraform:  + enable_snat = (known after apply) 2025-09-18 09:54:41.513633 | orchestrator | 09:54:41.513 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-09-18 09:54:41.513818 | orchestrator | 09:54:41.513 STDOUT terraform:  + external_qos_policy_id = (known after apply) 2025-09-18 09:54:41.513893 | orchestrator | 09:54:41.513 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.513947 | orchestrator | 09:54:41.513 STDOUT terraform:  + name = "testbed" 2025-09-18 09:54:41.514004 | orchestrator | 09:54:41.513 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.514109 | orchestrator | 09:54:41.514 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.514150 | orchestrator | 09:54:41.514 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-09-18 09:54:41.514201 | orchestrator | 09:54:41.514 STDOUT terraform:  } 2025-09-18 09:54:41.517427 | orchestrator | 09:54:41.514 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-09-18 09:54:41.518297 | orchestrator | 09:54:41.517 STDOUT terraform:  + resource 
"openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-09-18 09:54:41.519411 | orchestrator | 09:54:41.518 STDOUT terraform:  + description = "ssh" 2025-09-18 09:54:41.520442 | orchestrator | 09:54:41.519 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.521487 | orchestrator | 09:54:41.520 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.522176 | orchestrator | 09:54:41.521 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.522832 | orchestrator | 09:54:41.522 STDOUT terraform:  + port_range_max = 22 2025-09-18 09:54:41.523694 | orchestrator | 09:54:41.522 STDOUT terraform:  + port_range_min = 22 2025-09-18 09:54:41.524503 | orchestrator | 09:54:41.522 STDOUT terraform:  + protocol = "tcp" 2025-09-18 09:54:41.524722 | orchestrator | 09:54:41.522 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.525042 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.525107 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.525146 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 09:54:41.525354 | orchestrator | 09:54:41.522 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.525360 | orchestrator | 09:54:41.522 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.525372 | orchestrator | 09:54:41.522 STDOUT terraform:  } 2025-09-18 09:54:41.525400 | orchestrator | 09:54:41.522 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-09-18 09:54:41.525420 | orchestrator | 09:54:41.522 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-09-18 09:54:41.525461 | orchestrator | 09:54:41.522 STDOUT terraform:  + description = "wireguard" 2025-09-18 09:54:41.525482 | orchestrator 
| 09:54:41.522 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.525528 | orchestrator | 09:54:41.522 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.525577 | orchestrator | 09:54:41.522 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.525617 | orchestrator | 09:54:41.522 STDOUT terraform:  + port_range_max = 51820 2025-09-18 09:54:41.525710 | orchestrator | 09:54:41.522 STDOUT terraform:  + port_range_min = 51820 2025-09-18 09:54:41.525753 | orchestrator | 09:54:41.522 STDOUT terraform:  + protocol = "udp" 2025-09-18 09:54:41.525793 | orchestrator | 09:54:41.522 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.525815 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.525856 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.525883 | orchestrator | 09:54:41.522 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 09:54:41.525940 | orchestrator | 09:54:41.522 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.525983 | orchestrator | 09:54:41.523 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.526033 | orchestrator | 09:54:41.523 STDOUT terraform:  } 2025-09-18 09:54:41.526039 | orchestrator | 09:54:41.523 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-09-18 09:54:41.526051 | orchestrator | 09:54:41.523 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-09-18 09:54:41.526100 | orchestrator | 09:54:41.523 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.526141 | orchestrator | 09:54:41.523 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.526200 | orchestrator | 09:54:41.523 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.526221 | orchestrator | 
09:54:41.523 STDOUT terraform:  + protocol = "tcp" 2025-09-18 09:54:41.526241 | orchestrator | 09:54:41.523 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.526353 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.526403 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.526450 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-09-18 09:54:41.526503 | orchestrator | 09:54:41.523 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.526552 | orchestrator | 09:54:41.523 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.526594 | orchestrator | 09:54:41.523 STDOUT terraform:  } 2025-09-18 09:54:41.526713 | orchestrator | 09:54:41.523 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-09-18 09:54:41.526868 | orchestrator | 09:54:41.523 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-09-18 09:54:41.527321 | orchestrator | 09:54:41.523 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.527866 | orchestrator | 09:54:41.523 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.528129 | orchestrator | 09:54:41.523 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.528771 | orchestrator | 09:54:41.523 STDOUT terraform:  + protocol = "udp" 2025-09-18 09:54:41.529173 | orchestrator | 09:54:41.523 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.529620 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.529687 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.529796 | orchestrator | 09:54:41.523 STDOUT terraform:  + remote_ip_prefix = 
"192.168.16.0/20" 2025-09-18 09:54:41.530036 | orchestrator | 09:54:41.523 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.530116 | orchestrator | 09:54:41.523 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.530602 | orchestrator | 09:54:41.523 STDOUT terraform:  } 2025-09-18 09:54:41.531065 | orchestrator | 09:54:41.523 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-09-18 09:54:41.531636 | orchestrator | 09:54:41.524 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-09-18 09:54:41.531938 | orchestrator | 09:54:41.524 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.532325 | orchestrator | 09:54:41.524 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.532885 | orchestrator | 09:54:41.524 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.532897 | orchestrator | 09:54:41.524 STDOUT terraform:  + protocol = "icmp" 2025-09-18 09:54:41.533424 | orchestrator | 09:54:41.524 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.533430 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.533434 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.533438 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 09:54:41.533442 | orchestrator | 09:54:41.524 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.533445 | orchestrator | 09:54:41.524 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.533449 | orchestrator | 09:54:41.524 STDOUT terraform:  } 2025-09-18 09:54:41.533493 | orchestrator | 09:54:41.524 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-09-18 09:54:41.533531 | 
orchestrator | 09:54:41.524 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-09-18 09:54:41.533543 | orchestrator | 09:54:41.524 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.533622 | orchestrator | 09:54:41.524 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.533665 | orchestrator | 09:54:41.524 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.533718 | orchestrator | 09:54:41.524 STDOUT terraform:  + protocol = "tcp" 2025-09-18 09:54:41.533829 | orchestrator | 09:54:41.524 STDOUT terraform:  + region = (known after apply) 2025-09-18 09:54:41.533841 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_address_group_id = (known after apply) 2025-09-18 09:54:41.533845 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_group_id = (known after apply) 2025-09-18 09:54:41.533848 | orchestrator | 09:54:41.524 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-09-18 09:54:41.533885 | orchestrator | 09:54:41.524 STDOUT terraform:  + security_group_id = (known after apply) 2025-09-18 09:54:41.533964 | orchestrator | 09:54:41.524 STDOUT terraform:  + tenant_id = (known after apply) 2025-09-18 09:54:41.534003 | orchestrator | 09:54:41.524 STDOUT terraform:  } 2025-09-18 09:54:41.534035 | orchestrator | 09:54:41.524 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-09-18 09:54:41.534039 | orchestrator | 09:54:41.524 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-09-18 09:54:41.534043 | orchestrator | 09:54:41.525 STDOUT terraform:  + direction = "ingress" 2025-09-18 09:54:41.534108 | orchestrator | 09:54:41.525 STDOUT terraform:  + ethertype = "IPv4" 2025-09-18 09:54:41.534126 | orchestrator | 09:54:41.525 STDOUT terraform:  + id = (known after apply) 2025-09-18 09:54:41.534131 | orchestrator | 09:54:41.525 STDOUT terraform:  + protocol = "udp" 
2025-09-18 09:54:41.534237 | orchestrator | 09:54:41.525 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.534242 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-18 09:54:41.534254 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-18 09:54:41.534350 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-18 09:54:41.534355 | orchestrator | 09:54:41.525 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-18 09:54:41.534397 | orchestrator | 09:54:41.525 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.534437 | orchestrator | 09:54:41.525 STDOUT terraform:  }
2025-09-18 09:54:41.534448 | orchestrator | 09:54:41.525 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created
2025-09-18 09:54:41.534452 | orchestrator | 09:54:41.525 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" {
2025-09-18 09:54:41.534456 | orchestrator | 09:54:41.525 STDOUT terraform:  + direction = "ingress"
2025-09-18 09:54:41.534477 | orchestrator | 09:54:41.525 STDOUT terraform:  + ethertype = "IPv4"
2025-09-18 09:54:41.534534 | orchestrator | 09:54:41.525 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.534587 | orchestrator | 09:54:41.525 STDOUT terraform:  + protocol = "icmp"
2025-09-18 09:54:41.534635 | orchestrator | 09:54:41.525 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.534663 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-18 09:54:41.534702 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-18 09:54:41.534721 | orchestrator | 09:54:41.525 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-18 09:54:41.534812 | orchestrator | 09:54:41.525 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-18 09:54:41.534823 | orchestrator | 09:54:41.525 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.534827 | orchestrator | 09:54:41.525 STDOUT terraform:  }
2025-09-18 09:54:41.534850 | orchestrator | 09:54:41.525 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-09-18 09:54:41.534894 | orchestrator | 09:54:41.525 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-09-18 09:54:41.534913 | orchestrator | 09:54:41.525 STDOUT terraform:  + description = "vrrp"
2025-09-18 09:54:41.534917 | orchestrator | 09:54:41.525 STDOUT terraform:  + direction = "ingress"
2025-09-18 09:54:41.534921 | orchestrator | 09:54:41.526 STDOUT terraform:  + ethertype = "IPv4"
2025-09-18 09:54:41.535107 | orchestrator | 09:54:41.526 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.535149 | orchestrator | 09:54:41.526 STDOUT terraform:  + protocol = "112"
2025-09-18 09:54:41.535153 | orchestrator | 09:54:41.526 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.535267 | orchestrator | 09:54:41.526 STDOUT terraform:  + remote_address_group_id = (known after apply)
2025-09-18 09:54:41.535310 | orchestrator | 09:54:41.526 STDOUT terraform:  + remote_group_id = (known after apply)
2025-09-18 09:54:41.535331 | orchestrator | 09:54:41.526 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-09-18 09:54:41.535441 | orchestrator | 09:54:41.526 STDOUT terraform:  + security_group_id = (known after apply)
2025-09-18 09:54:41.535452 | orchestrator | 09:54:41.526 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.535504 | orchestrator | 09:54:41.526 STDOUT terraform:  }
2025-09-18 09:54:41.535517 | orchestrator | 09:54:41.526 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-09-18 09:54:41.535535 | orchestrator | 09:54:41.526 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-09-18 09:54:41.535545 | orchestrator | 09:54:41.526 STDOUT terraform:  + all_tags = (known after apply)
2025-09-18 09:54:41.535550 | orchestrator | 09:54:41.526 STDOUT terraform:  + description = "management security group"
2025-09-18 09:54:41.535584 | orchestrator | 09:54:41.526 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.535597 | orchestrator | 09:54:41.526 STDOUT terraform:  + name = "testbed-management"
2025-09-18 09:54:41.535614 | orchestrator | 09:54:41.526 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.535663 | orchestrator | 09:54:41.526 STDOUT terraform:  + stateful = (known after apply)
2025-09-18 09:54:41.535702 | orchestrator | 09:54:41.526 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.535719 | orchestrator | 09:54:41.526 STDOUT terraform:  }
2025-09-18 09:54:41.535724 | orchestrator | 09:54:41.526 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-09-18 09:54:41.535728 | orchestrator | 09:54:41.526 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-09-18 09:54:41.535808 | orchestrator | 09:54:41.526 STDOUT terraform:  + all_tags = (known after apply)
2025-09-18 09:54:41.535813 | orchestrator | 09:54:41.526 STDOUT terraform:  + description = "node security group"
2025-09-18 09:54:41.535824 | orchestrator | 09:54:41.526 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.535828 | orchestrator | 09:54:41.526 STDOUT terraform:  + name = "testbed-node"
2025-09-18 09:54:41.535896 | orchestrator | 09:54:41.526 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.535916 | orchestrator | 09:54:41.526 STDOUT terraform:  + stateful = (known after apply)
2025-09-18 09:54:41.535956 | orchestrator | 09:54:41.526 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.536107 | orchestrator | 09:54:41.526 STDOUT terraform:  }
2025-09-18 09:54:41.536119 | orchestrator | 09:54:41.526 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-09-18 09:54:41.536143 | orchestrator | 09:54:41.526 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-09-18 09:54:41.536151 | orchestrator | 09:54:41.527 STDOUT terraform:  + all_tags = (known after apply)
2025-09-18 09:54:41.536230 | orchestrator | 09:54:41.527 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-09-18 09:54:41.536242 | orchestrator | 09:54:41.527 STDOUT terraform:  + dns_nameservers = [
2025-09-18 09:54:41.536247 | orchestrator | 09:54:41.527 STDOUT terraform:  + "8.8.8.8",
2025-09-18 09:54:41.536250 | orchestrator | 09:54:41.527 STDOUT terraform:  + "9.9.9.9",
2025-09-18 09:54:41.536331 | orchestrator | 09:54:41.527 STDOUT terraform:  ]
2025-09-18 09:54:41.536352 | orchestrator | 09:54:41.527 STDOUT terraform:  + enable_dhcp = true
2025-09-18 09:54:41.536414 | orchestrator | 09:54:41.527 STDOUT terraform:  + gateway_ip = (known after apply)
2025-09-18 09:54:41.536428 | orchestrator | 09:54:41.527 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.536439 | orchestrator | 09:54:41.527 STDOUT terraform:  + ip_version = 4
2025-09-18 09:54:41.536543 | orchestrator | 09:54:41.527 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-09-18 09:54:41.536555 | orchestrator | 09:54:41.527 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-09-18 09:54:41.536559 | orchestrator | 09:54:41.527 STDOUT terraform:  + name = "subnet-testbed-management"
2025-09-18 09:54:41.536608 | orchestrator | 09:54:41.527 STDOUT terraform:  + network_id = (known after apply)
2025-09-18 09:54:41.536636 | orchestrator | 09:54:41.527 STDOUT terraform:  + no_gateway = false
2025-09-18 09:54:41.536663 | orchestrator | 09:54:41.527 STDOUT terraform:  + region = (known after apply)
2025-09-18 09:54:41.536667 | orchestrator | 09:54:41.527 STDOUT terraform:  + service_types = (known after apply)
2025-09-18 09:54:41.536688 | orchestrator | 09:54:41.527 STDOUT terraform:  + tenant_id = (known after apply)
2025-09-18 09:54:41.536742 | orchestrator | 09:54:41.527 STDOUT terraform:  + allocation_pool {
2025-09-18 09:54:41.536809 | orchestrator | 09:54:41.527 STDOUT terraform:  + end = "192.168.31.250"
2025-09-18 09:54:41.536822 | orchestrator | 09:54:41.527 STDOUT terraform:  + start = "192.168.31.200"
2025-09-18 09:54:41.536834 | orchestrator | 09:54:41.527 STDOUT terraform:  }
2025-09-18 09:54:41.537455 | orchestrator | 09:54:41.527 STDOUT terraform:  }
2025-09-18 09:54:41.537487 | orchestrator | 09:54:41.527 STDOUT terraform:  # terraform_data.image will be created
2025-09-18 09:54:41.537493 | orchestrator | 09:54:41.527 STDOUT terraform:  + resource "terraform_data" "image" {
2025-09-18 09:54:41.537497 | orchestrator | 09:54:41.527 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.537501 | orchestrator | 09:54:41.528 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-18 09:54:41.537506 | orchestrator | 09:54:41.528 STDOUT terraform:  + output = (known after apply)
2025-09-18 09:54:41.537513 | orchestrator | 09:54:41.528 STDOUT terraform:  }
2025-09-18 09:54:41.537517 | orchestrator | 09:54:41.528 STDOUT terraform:  # terraform_data.image_node will be created
2025-09-18 09:54:41.537526 | orchestrator | 09:54:41.528 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-09-18 09:54:41.537530 | orchestrator | 09:54:41.528 STDOUT terraform:  + id = (known after apply)
2025-09-18 09:54:41.537533 | orchestrator | 09:54:41.528 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-09-18 09:54:41.537537 | orchestrator | 09:54:41.528 STDOUT terraform:  + output = (known after apply)
2025-09-18 09:54:41.537541 | orchestrator | 09:54:41.528 STDOUT terraform:  }
2025-09-18 09:54:41.537544 | orchestrator | 09:54:41.528 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-09-18 09:54:41.537549 | orchestrator | 09:54:41.528 STDOUT terraform: Changes to Outputs:
2025-09-18 09:54:41.537560 | orchestrator | 09:54:41.528 STDOUT terraform:  + manager_address = (sensitive value)
2025-09-18 09:54:41.537564 | orchestrator | 09:54:41.528 STDOUT terraform:  + private_key = (sensitive value)
2025-09-18 09:54:41.643760 | orchestrator | 09:54:41.643 STDOUT terraform: terraform_data.image: Creating...
2025-09-18 09:54:41.643803 | orchestrator | 09:54:41.643 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=f1bfd036-9393-dcd4-6553-868f6df3ed5d]
2025-09-18 09:54:41.643811 | orchestrator | 09:54:41.643 STDOUT terraform: terraform_data.image_node: Creating...
2025-09-18 09:54:41.644450 | orchestrator | 09:54:41.644 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=e7d9daaa-1d8a-0b93-4371-2c93bd054958]
2025-09-18 09:54:41.655780 | orchestrator | 09:54:41.655 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-09-18 09:54:41.655821 | orchestrator | 09:54:41.655 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-09-18 09:54:41.662796 | orchestrator | 09:54:41.662 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-09-18 09:54:41.667229 | orchestrator | 09:54:41.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-09-18 09:54:41.667980 | orchestrator | 09:54:41.667 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
2025-09-18 09:54:41.668110 | orchestrator | 09:54:41.668 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-09-18 09:54:41.673461 | orchestrator | 09:54:41.673 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-09-18 09:54:41.674131 | orchestrator | 09:54:41.674 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
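For readability, the VRRP rule and the management subnet from the plan output can be collected into a plain HCL sketch. This is a reconstruction from the log, not the testbed's actual source: only attributes the plan prints literally are shown, and the two `.id` references are assumed wiring between resources whose names appear in the plan.

```hcl
# Sketch reconstructed from the plan output above; attributes shown as
# "(known after apply)" in the plan are omitted and left to the provider.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # IP protocol number 112 = VRRP
  remote_ip_prefix  = "0.0.0.0/0"
  # Assumed: the rule is attached to the management security group.
  security_group_id = openstack_networking_secgroup_v2.security_group_management.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  # Assumed: the subnet belongs to the management network created in the same plan.
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the DHCP allocation pool (192.168.31.200-250) deliberately covers only a small slice at the top of the /20, leaving the rest of the range free for the statically addressed ports created later in the apply.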
2025-09-18 09:54:41.678456 | orchestrator | 09:54:41.678 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-09-18 09:54:41.679504 | orchestrator | 09:54:41.679 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-09-18 09:54:42.105159 | orchestrator | 09:54:42.104 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-18 09:54:42.652346 | orchestrator | 09:54:42.108 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=846820b2-039e-4b42-adad-daf72e0f8ea4]
2025-09-18 09:54:42.652392 | orchestrator | 09:54:42.112 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-09-18 09:54:42.652398 | orchestrator | 09:54:42.118 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-09-18 09:54:42.652404 | orchestrator | 09:54:42.163 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-09-18 09:54:42.652408 | orchestrator | 09:54:42.169 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-09-18 09:54:42.730959 | orchestrator | 09:54:42.730 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 1s [id=97e2758c-98d2-48a4-a6e3-1f9c4654d5c7]
2025-09-18 09:54:42.740574 | orchestrator | 09:54:42.740 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-09-18 09:54:45.326096 | orchestrator | 09:54:45.325 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 3s [id=a9e5fe38-9aa1-47d1-b292-dbaa7924ce64]
2025-09-18 09:54:45.329415 | orchestrator | 09:54:45.329 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 3s [id=a69d22c4-e927-4699-a327-d057749b4040]
2025-09-18 09:54:45.335777 | orchestrator | 09:54:45.335 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-09-18 09:54:45.335856 | orchestrator | 09:54:45.335 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-09-18 09:54:45.338367 | orchestrator | 09:54:45.338 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 3s [id=56fd191f-3e0c-491f-8cd9-aabd31cc0836]
2025-09-18 09:54:45.351044 | orchestrator | 09:54:45.350 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-09-18 09:54:45.382604 | orchestrator | 09:54:45.382 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 3s [id=00278712-8848-43cc-b367-9df7adc0d1b4]
2025-09-18 09:54:45.387949 | orchestrator | 09:54:45.387 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 3s [id=649a7a14-18b6-4e11-8675-ab8fe85002f2]
2025-09-18 09:54:45.391562 | orchestrator | 09:54:45.391 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-09-18 09:54:45.393552 | orchestrator | 09:54:45.393 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-09-18 09:54:45.408878 | orchestrator | 09:54:45.408 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 3s [id=e49cb3c6-bfd0-4159-abb8-b26259c9fbe2]
2025-09-18 09:54:45.420471 | orchestrator | 09:54:45.420 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-09-18 09:54:45.421137 | orchestrator | 09:54:45.420 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 3s [id=32515b61-c47f-4019-8995-ef0e516a1d70]
2025-09-18 09:54:45.431452 | orchestrator | 09:54:45.431 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-09-18 09:54:45.435311 | orchestrator | 09:54:45.435 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=018b4168c0c5e467d8ab7df41f5562d0ef47c1e5]
2025-09-18 09:54:45.439825 | orchestrator | 09:54:45.439 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-09-18 09:54:45.443907 | orchestrator | 09:54:45.443 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=d99c8de7c0ac7ba4dbed03a6de7958c2fedbb241]
2025-09-18 09:54:45.446996 | orchestrator | 09:54:45.446 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 3s [id=f3f02157-3479-476e-b2a3-c621f2183940]
2025-09-18 09:54:45.447779 | orchestrator | 09:54:45.447 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 3s [id=9c9fa6f7-5631-4b7c-8490-02f085d70a52]
2025-09-18 09:54:45.449126 | orchestrator | 09:54:45.449 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-09-18 09:54:46.089733 | orchestrator | 09:54:46.089 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 3s [id=d793f249-b859-4211-aee9-7d27fd7330c6]
2025-09-18 09:54:46.361576 | orchestrator | 09:54:46.361 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 1s [id=a5769c1d-4b0f-4a4a-b7af-f02f362dad90]
2025-09-18 09:54:46.368173 | orchestrator | 09:54:46.367 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-09-18 09:54:48.812707 | orchestrator | 09:54:48.812 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 4s [id=742a5747-f873-4808-a190-7917a84c4500]
2025-09-18 09:54:48.817019 | orchestrator | 09:54:48.816 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 4s [id=05b7d5c7-8ae0-478b-9c11-f5c3a25542ec]
2025-09-18 09:54:48.836233 | orchestrator | 09:54:48.835 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 4s [id=66fc429d-b5e6-4c66-945d-f5e80dd7853a]
2025-09-18 09:54:48.837950 | orchestrator | 09:54:48.837 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 4s [id=d92083cd-8111-41a2-a6b5-4afbc391d177]
2025-09-18 09:54:48.848882 | orchestrator | 09:54:48.848 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 4s [id=4afe00d7-77e4-4bb6-991c-926b9ce2357f]
2025-09-18 09:54:49.207003 | orchestrator | 09:54:49.206 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 4s [id=5594d9e1-f687-4f78-a96a-01f5c1f70135]
2025-09-18 09:54:52.056385 | orchestrator | 09:54:52.056 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 6s [id=fadc78ff-1c0f-4022-862b-c55ca0b2fd48]
2025-09-18 09:54:52.064465 | orchestrator | 09:54:52.062 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-09-18 09:54:52.064529 | orchestrator | 09:54:52.063 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-09-18 09:54:52.064667 | orchestrator | 09:54:52.064 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-09-18 09:54:52.260144 | orchestrator | 09:54:52.259 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 0s [id=37204196-c7f1-48f7-ab4f-6e7a0316909d]
2025-09-18 09:54:52.279957 | orchestrator | 09:54:52.279 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-09-18 09:54:52.280940 | orchestrator | 09:54:52.280 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-09-18 09:54:52.281285 | orchestrator | 09:54:52.281 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-09-18 09:54:52.282344 | orchestrator | 09:54:52.282 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-09-18 09:54:52.282542 | orchestrator | 09:54:52.282 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-09-18 09:54:52.286226 | orchestrator | 09:54:52.286 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-09-18 09:54:52.290896 | orchestrator | 09:54:52.290 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-09-18 09:54:52.291560 | orchestrator | 09:54:52.291 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-09-18 09:54:52.293696 | orchestrator | 09:54:52.292 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 0s [id=4aae5c8c-80b7-4699-b070-b569fa14ed5c]
2025-09-18 09:54:52.300429 | orchestrator | 09:54:52.300 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-09-18 09:54:52.512882 | orchestrator | 09:54:52.512 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 1s [id=e7357a17-c258-4c04-93a2-233df4a286cf]
2025-09-18 09:54:52.528387 | orchestrator | 09:54:52.528 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-09-18 09:54:52.715821 | orchestrator | 09:54:52.715 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 1s [id=d43794c4-46a9-4d03-aa15-a8d9cf04eae7]
2025-09-18 09:54:52.722236 | orchestrator | 09:54:52.722 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-09-18 09:54:52.925611 | orchestrator | 09:54:52.925 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 1s [id=0668e52b-b54b-4a4c-b368-f6505d9c05ef]
2025-09-18 09:54:52.937415 | orchestrator | 09:54:52.937 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-09-18 09:54:52.943340 | orchestrator | 09:54:52.942 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=dc18175c-03b6-4ecb-b263-9c3b795851bd]
2025-09-18 09:54:52.949039 | orchestrator | 09:54:52.948 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-09-18 09:54:53.028960 | orchestrator | 09:54:53.028 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 1s [id=7a0c6963-d022-4f68-b3d7-3bbb99eebdd6]
2025-09-18 09:54:53.035316 | orchestrator | 09:54:53.035 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-09-18 09:54:53.164249 | orchestrator | 09:54:53.163 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 1s [id=eb1646eb-da5d-4231-9281-035ab1810886]
2025-09-18 09:54:53.170245 | orchestrator | 09:54:53.169 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-09-18 09:54:53.184136 | orchestrator | 09:54:53.183 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 1s [id=508c660a-9ddc-4c86-afbe-91a86d140aa0]
2025-09-18 09:54:53.186534 | orchestrator | 09:54:53.186 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 1s [id=3085e031-2f3a-4f84-bee2-942544044320]
2025-09-18 09:54:53.189115 | orchestrator | 09:54:53.188 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 1s [id=04d127dd-2daa-49e8-b093-89d96b7090ff]
2025-09-18 09:54:53.193614 | orchestrator | 09:54:53.193 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-09-18 09:54:53.210517 | orchestrator | 09:54:53.210 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=5ff28348-e3b6-45cb-848b-b32b5b861517]
2025-09-18 09:54:53.229858 | orchestrator | 09:54:53.229 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 0s [id=0d33c808-9866-4995-9032-37edcb97dcaf]
2025-09-18 09:54:53.353826 | orchestrator | 09:54:53.353 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 1s [id=0ab19a52-7861-4e6e-9547-7da18a39d3e7]
2025-09-18 09:54:53.449908 | orchestrator | 09:54:53.449 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=2c9d0281-94ac-4a66-bb34-5b599cff3762]
2025-09-18 09:54:53.688780 | orchestrator | 09:54:53.688 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=40c6ace0-be0b-4ed5-85c1-9bbc5c3d43eb]
2025-09-18 09:54:53.985935 | orchestrator | 09:54:53.985 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=48fc3db0-76fa-4999-803a-d1285a82f38b]
2025-09-18 09:54:54.042218 | orchestrator | 09:54:54.041 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 1s [id=45bc695c-dc46-40d4-9dce-1fab25bbfcc3]
2025-09-18 09:54:54.497336 | orchestrator | 09:54:54.496 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 2s [id=b85d7cba-7787-4b1c-af41-c3e62bbe136a]
2025-09-18 09:54:54.509999 | orchestrator | 09:54:54.509 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-09-18 09:54:54.527071 | orchestrator | 09:54:54.526 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-09-18 09:54:54.534599 | orchestrator | 09:54:54.534 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-09-18 09:54:54.537421 | orchestrator | 09:54:54.537 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-09-18 09:54:54.545337 | orchestrator | 09:54:54.545 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-09-18 09:54:54.545711 | orchestrator | 09:54:54.545 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-09-18 09:54:54.545815 | orchestrator | 09:54:54.545 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-09-18 09:54:55.775813 | orchestrator | 09:54:55.775 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 1s [id=3cee523f-3ba2-41a9-bac2-5a4551c9da0b]
2025-09-18 09:54:55.785399 | orchestrator | 09:54:55.785 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-09-18 09:54:55.790097 | orchestrator | 09:54:55.789 STDOUT terraform: local_file.inventory: Creating...
2025-09-18 09:54:55.793338 | orchestrator | 09:54:55.793 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-09-18 09:54:55.793421 | orchestrator | 09:54:55.793 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=b443f3cf3e05a0c50d62d35c6e77cdc52d38772f]
2025-09-18 09:54:55.797787 | orchestrator | 09:54:55.797 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=b3fe5f348e5b3c4cfaa377c9d90e00f34e90f4e6]
2025-09-18 09:54:56.593256 | orchestrator | 09:54:56.592 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=3cee523f-3ba2-41a9-bac2-5a4551c9da0b]
2025-09-18 09:55:04.536176 | orchestrator | 09:55:04.535 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-09-18 09:55:04.542482 | orchestrator | 09:55:04.542 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-09-18 09:55:04.544561 | orchestrator | 09:55:04.544 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-09-18 09:55:04.553950 | orchestrator | 09:55:04.553 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-09-18 09:55:04.554131 | orchestrator | 09:55:04.553 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-09-18 09:55:04.554257 | orchestrator | 09:55:04.554 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-09-18 09:55:14.536444 | orchestrator | 09:55:14.536 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-09-18 09:55:14.543706 | orchestrator | 09:55:14.543 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-09-18 09:55:14.545006 | orchestrator | 09:55:14.544 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-09-18 09:55:14.555117 | orchestrator | 09:55:14.554 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-09-18 09:55:14.555220 | orchestrator | 09:55:14.555 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-09-18 09:55:14.555417 | orchestrator | 09:55:14.555 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-09-18 09:55:14.965277 | orchestrator | 09:55:14.964 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 20s [id=8a90120b-96ad-4979-a209-e827e1c8f98e]
2025-09-18 09:55:15.033780 | orchestrator | 09:55:15.033 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 20s [id=1e5f3d1c-3df2-4398-9cc1-5164875602e1]
2025-09-18 09:55:15.159459 | orchestrator | 09:55:15.159 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 20s [id=64208e8f-a17b-4ca9-b085-6cdcc36ad8fc]
2025-09-18 09:55:24.549934 | orchestrator | 09:55:24.549 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [30s elapsed]
2025-09-18 09:55:24.556053 | orchestrator | 09:55:24.555 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [30s elapsed]
2025-09-18 09:55:24.556151 | orchestrator | 09:55:24.556 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-09-18 09:55:25.253153 | orchestrator | 09:55:25.252 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 30s [id=1caa3214-96c3-49c9-8aa1-07157f34782f]
2025-09-18 09:55:25.333966 | orchestrator | 09:55:25.333 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 30s [id=19bdd908-c51a-471a-bf48-0e8b22b1df3b]
2025-09-18 09:55:25.636644 | orchestrator | 09:55:25.636 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 31s [id=eea88ba7-c99a-4e89-8b83-22fc238ccafa]
2025-09-18 09:55:25.647713 | orchestrator | 09:55:25.647 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-09-18 09:55:25.668567 | orchestrator | 09:55:25.668 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-09-18 09:55:25.668797 | orchestrator | 09:55:25.668 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-09-18 09:55:25.669895 | orchestrator | 09:55:25.669 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-09-18 09:55:25.671080 | orchestrator | 09:55:25.670 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-09-18 09:55:25.672666 | orchestrator | 09:55:25.672 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=7462537649946175490]
2025-09-18 09:55:25.681575 | orchestrator | 09:55:25.681 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating...
2025-09-18 09:55:25.682809 | orchestrator | 09:55:25.682 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-09-18 09:55:25.684466 | orchestrator | 09:55:25.684 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating...
2025-09-18 09:55:25.699865 | orchestrator | 09:55:25.699 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
2025-09-18 09:55:25.700348 | orchestrator | 09:55:25.700 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-09-18 09:55:25.711409 | orchestrator | 09:55:25.711 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating...
2025-09-18 09:55:29.079476 | orchestrator | 09:55:29.079 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 3s [id=1caa3214-96c3-49c9-8aa1-07157f34782f/00278712-8848-43cc-b367-9df7adc0d1b4]
2025-09-18 09:55:29.105596 | orchestrator | 09:55:29.105 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 3s [id=eea88ba7-c99a-4e89-8b83-22fc238ccafa/a9e5fe38-9aa1-47d1-b292-dbaa7924ce64]
2025-09-18 09:55:29.116690 | orchestrator | 09:55:29.116 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 3s [id=1caa3214-96c3-49c9-8aa1-07157f34782f/f3f02157-3479-476e-b2a3-c621f2183940]
2025-09-18 09:55:29.140255 | orchestrator | 09:55:29.139 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 3s [id=19bdd908-c51a-471a-bf48-0e8b22b1df3b/a69d22c4-e927-4699-a327-d057749b4040]
2025-09-18 09:55:29.141063 | orchestrator | 09:55:29.140 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 3s [id=eea88ba7-c99a-4e89-8b83-22fc238ccafa/56fd191f-3e0c-491f-8cd9-aabd31cc0836]
2025-09-18 09:55:29.166855 | orchestrator | 09:55:29.166 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 3s [id=19bdd908-c51a-471a-bf48-0e8b22b1df3b/e49cb3c6-bfd0-4159-abb8-b26259c9fbe2]
2025-09-18 09:55:35.230516 | orchestrator | 09:55:35.230 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 9s [id=19bdd908-c51a-471a-bf48-0e8b22b1df3b/649a7a14-18b6-4e11-8675-ab8fe85002f2]
2025-09-18 09:55:35.263027 | orchestrator | 09:55:35.262 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 9s [id=eea88ba7-c99a-4e89-8b83-22fc238ccafa/9c9fa6f7-5631-4b7c-8490-02f085d70a52]
2025-09-18 09:55:35.308962 | orchestrator | 09:55:35.308 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 9s [id=1caa3214-96c3-49c9-8aa1-07157f34782f/32515b61-c47f-4019-8995-ef0e516a1d70]
2025-09-18 09:55:35.714864 | orchestrator | 09:55:35.714 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed]
2025-09-18 09:55:45.715461 | orchestrator | 09:55:45.715 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed]
2025-09-18 09:55:46.272176 | orchestrator | 09:55:46.271 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 20s [id=3590db0a-4613-4bf4-a96f-d1b98b414ced]
2025-09-18 09:55:46.290717 | orchestrator | 09:55:46.290 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed.
2025-09-18 09:55:46.290809 | orchestrator | 09:55:46.290 STDOUT terraform: Outputs:
2025-09-18 09:55:46.290836 | orchestrator | 09:55:46.290 STDOUT terraform: manager_address = 
2025-09-18 09:55:46.290855 | orchestrator | 09:55:46.290 STDOUT terraform: private_key = 
2025-09-18 09:55:46.701276 | orchestrator | ok: Runtime: 0:01:12.418997
2025-09-18 09:55:46.738509 | 
2025-09-18 09:55:46.738636 | TASK [Create infrastructure (stable)]
2025-09-18 09:55:47.274418 | orchestrator | skipping: Conditional result was False
2025-09-18 09:55:47.293374 | 
2025-09-18 09:55:47.293627 | TASK [Fetch manager address]
2025-09-18 09:55:47.717312 | orchestrator | ok
2025-09-18 09:55:47.726942 | 
2025-09-18 09:55:47.727078 | TASK [Set manager_host address]
2025-09-18 09:55:47.802731 | orchestrator | ok
2025-09-18 09:55:47.812861 | 
2025-09-18 09:55:47.813004 | LOOP [Update ansible collections]
2025-09-18 09:55:48.649642 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-18 09:55:48.650099 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-18 09:55:48.650173 | orchestrator | Starting galaxy collection install process
2025-09-18 09:55:48.650224 | orchestrator | Process install dependency map
2025-09-18 09:55:48.650269 | orchestrator | Starting collection install process
2025-09-18 09:55:48.650312 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons'
2025-09-18 09:55:48.650360 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons
2025-09-18 09:55:48.650520 | orchestrator | osism.commons:999.0.0 was installed successfully
2025-09-18 09:55:48.650622 | orchestrator | ok: Item: commons Runtime: 0:00:00.529399
2025-09-18 09:55:49.447804 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2
2025-09-18 09:55:49.447935 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2
2025-09-18 09:55:49.447967 | orchestrator | Starting galaxy collection install process
2025-09-18 09:55:49.447990 | orchestrator | Process install dependency map
2025-09-18 09:55:49.448012 | orchestrator | Starting collection install process
2025-09-18 09:55:49.448032 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services'
2025-09-18 09:55:49.448052 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/services
2025-09-18 09:55:49.448071 | orchestrator | osism.services:999.0.0 was installed successfully
2025-09-18 09:55:49.448101 | orchestrator | ok: Item: services Runtime: 0:00:00.562565
2025-09-18 09:55:49.463315 | 
2025-09-18 09:55:49.463450 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"]
2025-09-18 09:56:00.015880 | orchestrator | ok
2025-09-18 09:56:00.030891 | 
2025-09-18 09:56:00.031019 | TASK [Wait a little longer for the manager so that
everything is ready] 2025-09-18 09:57:00.068155 | orchestrator | ok 2025-09-18 09:57:00.075606 | 2025-09-18 09:57:00.075711 | TASK [Fetch manager ssh hostkey] 2025-09-18 09:57:01.648357 | orchestrator | Output suppressed because no_log was given 2025-09-18 09:57:01.662725 | 2025-09-18 09:57:01.662926 | TASK [Get ssh keypair from terraform environment] 2025-09-18 09:57:02.197891 | orchestrator | ok: Runtime: 0:00:00.009026 2025-09-18 09:57:02.213900 | 2025-09-18 09:57:02.214149 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-18 09:57:02.251810 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 2025-09-18 09:57:02.261010 | 2025-09-18 09:57:02.261136 | TASK [Run manager part 0] 2025-09-18 09:57:03.095775 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 09:57:03.138947 | orchestrator | 2025-09-18 09:57:03.138993 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-09-18 09:57:03.139000 | orchestrator | 2025-09-18 09:57:03.139012 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-09-18 09:57:04.918336 | orchestrator | ok: [testbed-manager] 2025-09-18 09:57:04.918390 | orchestrator | 2025-09-18 09:57:04.918418 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-18 09:57:04.918431 | orchestrator | 2025-09-18 09:57:04.918443 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 09:57:06.747633 | orchestrator | ok: [testbed-manager] 2025-09-18 09:57:06.747677 | orchestrator | 2025-09-18 09:57:06.747683 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-18 09:57:07.405794 | 
orchestrator | ok: [testbed-manager] 2025-09-18 09:57:07.405845 | orchestrator | 2025-09-18 09:57:07.405855 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-09-18 09:57:07.456135 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.456187 | orchestrator | 2025-09-18 09:57:07.456200 | orchestrator | TASK [Update package cache] **************************************************** 2025-09-18 09:57:07.481981 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.482053 | orchestrator | 2025-09-18 09:57:07.482063 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-18 09:57:07.504948 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.504982 | orchestrator | 2025-09-18 09:57:07.504989 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-18 09:57:07.528387 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.528420 | orchestrator | 2025-09-18 09:57:07.528426 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-18 09:57:07.550275 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.550306 | orchestrator | 2025-09-18 09:57:07.550313 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-09-18 09:57:07.573374 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.573409 | orchestrator | 2025-09-18 09:57:07.573416 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-09-18 09:57:07.598671 | orchestrator | skipping: [testbed-manager] 2025-09-18 09:57:07.598706 | orchestrator | 2025-09-18 09:57:07.598713 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-09-18 09:57:08.310995 | orchestrator | changed: [testbed-manager] 2025-09-18 09:57:08.311084 | 
orchestrator | 2025-09-18 09:57:08.311098 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-09-18 09:59:31.644971 | orchestrator | changed: [testbed-manager] 2025-09-18 09:59:31.645082 | orchestrator | 2025-09-18 09:59:31.645100 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-09-18 10:00:47.103749 | orchestrator | changed: [testbed-manager] 2025-09-18 10:00:47.103850 | orchestrator | 2025-09-18 10:00:47.103866 | orchestrator | TASK [Install required packages] *********************************************** 2025-09-18 10:01:08.065465 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:08.065575 | orchestrator | 2025-09-18 10:01:08.065604 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-09-18 10:01:16.203626 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:16.203751 | orchestrator | 2025-09-18 10:01:16.203772 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-18 10:01:16.248493 | orchestrator | ok: [testbed-manager] 2025-09-18 10:01:16.248564 | orchestrator | 2025-09-18 10:01:16.248579 | orchestrator | TASK [Get current user] ******************************************************** 2025-09-18 10:01:17.017861 | orchestrator | ok: [testbed-manager] 2025-09-18 10:01:17.017939 | orchestrator | 2025-09-18 10:01:17.017956 | orchestrator | TASK [Create venv directory] *************************************************** 2025-09-18 10:01:17.764317 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:17.764421 | orchestrator | 2025-09-18 10:01:17.764437 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-09-18 10:01:24.001256 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:24.001298 | orchestrator | 2025-09-18 10:01:24.001321 | orchestrator | TASK [Install ansible-core in 
venv] ******************************************** 2025-09-18 10:01:29.939095 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:29.939136 | orchestrator | 2025-09-18 10:01:29.939145 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-09-18 10:01:32.531997 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:32.532079 | orchestrator | 2025-09-18 10:01:32.532094 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-09-18 10:01:34.230914 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:34.230955 | orchestrator | 2025-09-18 10:01:34.230963 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-09-18 10:01:35.310745 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-18 10:01:35.310933 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-18 10:01:35.310948 | orchestrator | 2025-09-18 10:01:35.310961 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-09-18 10:01:35.353311 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-18 10:01:35.353405 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-18 10:01:35.353419 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-18 10:01:35.353432 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-18 10:01:38.514833 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-09-18 10:01:38.514869 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-09-18 10:01:38.514874 | orchestrator | 2025-09-18 10:01:38.514878 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-09-18 10:01:39.078084 | orchestrator | changed: [testbed-manager] 2025-09-18 10:01:39.078124 | orchestrator | 2025-09-18 10:01:39.078132 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-09-18 10:02:27.838347 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-09-18 10:02:27.838479 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-09-18 10:02:27.838498 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-09-18 10:02:27.838510 | orchestrator | 2025-09-18 10:02:27.838523 | orchestrator | TASK [Install local collections] *********************************************** 2025-09-18 10:02:30.155239 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-commons) 2025-09-18 10:02:30.155326 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-09-18 10:02:30.155341 | orchestrator | 2025-09-18 10:02:30.155353 | orchestrator | PLAY [Create operator user] **************************************************** 2025-09-18 10:02:30.155365 | orchestrator | 2025-09-18 10:02:30.155405 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 10:02:31.588527 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:31.588689 | orchestrator | 2025-09-18 10:02:31.588711 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-18 10:02:31.632870 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:31.632935 | 
orchestrator | 2025-09-18 10:02:31.632943 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-18 10:02:31.694428 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:31.694473 | orchestrator | 2025-09-18 10:02:31.694479 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-18 10:02:32.521121 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:32.521222 | orchestrator | 2025-09-18 10:02:32.521238 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-18 10:02:33.249427 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:33.249517 | orchestrator | 2025-09-18 10:02:33.249534 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-18 10:02:34.642366 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-09-18 10:02:34.642480 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-09-18 10:02:34.642496 | orchestrator | 2025-09-18 10:02:34.642522 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-18 10:02:36.016611 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:36.016704 | orchestrator | 2025-09-18 10:02:36.016717 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-18 10:02:37.596886 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-09-18 10:02:37.596979 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-09-18 10:02:37.596994 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-09-18 10:02:37.597006 | orchestrator | 2025-09-18 10:02:37.597018 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] *** 2025-09-18 10:02:37.653190 | orchestrator | skipping: 
[testbed-manager] 2025-09-18 10:02:37.653246 | orchestrator | 2025-09-18 10:02:37.653253 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-09-18 10:02:38.191107 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:38.191147 | orchestrator | 2025-09-18 10:02:38.191156 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-09-18 10:02:38.257534 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:38.257604 | orchestrator | 2025-09-18 10:02:38.257618 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-09-18 10:02:39.046224 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 10:02:39.046282 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:39.046290 | orchestrator | 2025-09-18 10:02:39.046296 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-09-18 10:02:39.085871 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:39.085915 | orchestrator | 2025-09-18 10:02:39.085926 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-09-18 10:02:39.116105 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:39.116147 | orchestrator | 2025-09-18 10:02:39.116158 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] ************** 2025-09-18 10:02:39.141726 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:39.141757 | orchestrator | 2025-09-18 10:02:39.141763 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-09-18 10:02:39.183096 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:39.183133 | orchestrator | 2025-09-18 10:02:39.183142 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-09-18 10:02:39.883333 | orchestrator 
| ok: [testbed-manager] 2025-09-18 10:02:39.883458 | orchestrator | 2025-09-18 10:02:39.883475 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-09-18 10:02:39.883487 | orchestrator | 2025-09-18 10:02:39.883499 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 10:02:41.281981 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:41.282103 | orchestrator | 2025-09-18 10:02:41.282121 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-09-18 10:02:42.237417 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:42.237946 | orchestrator | 2025-09-18 10:02:42.237961 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:02:42.237968 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=13 rescued=0 ignored=0 2025-09-18 10:02:42.237972 | orchestrator | 2025-09-18 10:02:42.494962 | orchestrator | ok: Runtime: 0:05:39.771890 2025-09-18 10:02:42.503991 | 2025-09-18 10:02:42.504087 | TASK [Point out that the log in on the manager is now possible] 2025-09-18 10:02:42.547006 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 2025-09-18 10:02:42.561928 | 2025-09-18 10:02:42.562057 | TASK [Point out that the following task takes some time and does not give any output] 2025-09-18 10:02:42.579181 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete.
2025-09-18 10:02:42.585832 | 2025-09-18 10:02:42.585916 | TASK [Run manager part 1 + 2] 2025-09-18 10:02:43.408710 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-09-18 10:02:43.461572 | orchestrator | 2025-09-18 10:02:43.461621 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-09-18 10:02:43.461628 | orchestrator | 2025-09-18 10:02:43.461641 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 10:02:46.301522 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:46.301572 | orchestrator | 2025-09-18 10:02:46.301593 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-09-18 10:02:46.335229 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:46.335271 | orchestrator | 2025-09-18 10:02:46.335280 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-09-18 10:02:46.371501 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:46.371544 | orchestrator | 2025-09-18 10:02:46.371552 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-18 10:02:46.407155 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:46.407189 | orchestrator | 2025-09-18 10:02:46.407197 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-18 10:02:46.465193 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:46.465236 | orchestrator | 2025-09-18 10:02:46.465246 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-18 10:02:46.519060 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:46.519098 | orchestrator | 2025-09-18 10:02:46.519108 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-18 10:02:46.558312 | 
orchestrator | included: /home/zuul-testbed05/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-09-18 10:02:46.558338 | orchestrator | 2025-09-18 10:02:46.558343 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-18 10:02:47.230462 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:47.230505 | orchestrator | 2025-09-18 10:02:47.230515 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-18 10:02:47.277929 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:02:47.278187 | orchestrator | 2025-09-18 10:02:47.278201 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-18 10:02:48.563462 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:48.563538 | orchestrator | 2025-09-18 10:02:48.563548 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-18 10:02:49.108066 | orchestrator | ok: [testbed-manager] 2025-09-18 10:02:49.108109 | orchestrator | 2025-09-18 10:02:49.108118 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-18 10:02:50.239041 | orchestrator | changed: [testbed-manager] 2025-09-18 10:02:50.239110 | orchestrator | 2025-09-18 10:02:50.239126 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-18 10:03:07.273919 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:07.273991 | orchestrator | 2025-09-18 10:03:07.274007 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-09-18 10:03:07.912946 | orchestrator | ok: [testbed-manager] 2025-09-18 10:03:07.913014 | orchestrator | 2025-09-18 10:03:07.913030 | orchestrator | TASK [Set repo_path fact] ****************************************************** 
2025-09-18 10:03:07.958320 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:03:07.958884 | orchestrator | 2025-09-18 10:03:07.958906 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-09-18 10:03:08.948646 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:08.948714 | orchestrator | 2025-09-18 10:03:08.948730 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-09-18 10:03:09.901507 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:09.901574 | orchestrator | 2025-09-18 10:03:09.901590 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-09-18 10:03:10.470227 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:10.470289 | orchestrator | 2025-09-18 10:03:10.470303 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-09-18 10:03:10.507368 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-09-18 10:03:10.507489 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-09-18 10:03:10.507506 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-09-18 10:03:10.507518 | orchestrator | deprecation_warnings=False in ansible.cfg. 
2025-09-18 10:03:12.368065 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:12.368147 | orchestrator | 2025-09-18 10:03:12.368163 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-09-18 10:03:21.103128 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-09-18 10:03:21.103245 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-09-18 10:03:21.103264 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-09-18 10:03:21.103277 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-09-18 10:03:21.103298 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-09-18 10:03:21.103310 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-09-18 10:03:21.103322 | orchestrator | 2025-09-18 10:03:21.103334 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-09-18 10:03:22.155385 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:22.155490 | orchestrator | 2025-09-18 10:03:22.155507 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-09-18 10:03:22.200294 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:03:22.200330 | orchestrator | 2025-09-18 10:03:22.200338 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-09-18 10:03:25.304693 | orchestrator | changed: [testbed-manager] 2025-09-18 10:03:25.304745 | orchestrator | 2025-09-18 10:03:25.304755 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-09-18 10:03:25.344228 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:03:25.344266 | orchestrator | 2025-09-18 10:03:25.344273 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-09-18 10:04:54.380332 | orchestrator | changed: [testbed-manager] 2025-09-18 
10:04:54.380371 | orchestrator | 2025-09-18 10:04:54.380379 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-09-18 10:04:55.484804 | orchestrator | ok: [testbed-manager] 2025-09-18 10:04:55.484870 | orchestrator | 2025-09-18 10:04:55.484888 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:04:55.484903 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-09-18 10:04:55.484916 | orchestrator | 2025-09-18 10:04:55.701748 | orchestrator | ok: Runtime: 0:02:12.693496 2025-09-18 10:04:55.732821 | 2025-09-18 10:04:55.733064 | TASK [Reboot manager] 2025-09-18 10:04:57.278829 | orchestrator | ok: Runtime: 0:00:00.949028 2025-09-18 10:04:57.286861 | 2025-09-18 10:04:57.286949 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-09-18 10:05:12.132599 | orchestrator | ok 2025-09-18 10:05:12.142400 | 2025-09-18 10:05:12.142544 | TASK [Wait a little longer for the manager so that everything is ready] 2025-09-18 10:06:12.179036 | orchestrator | ok 2025-09-18 10:06:12.189296 | 2025-09-18 10:06:12.189444 | TASK [Deploy manager + bootstrap nodes] 2025-09-18 10:06:14.602328 | orchestrator | 2025-09-18 10:06:14.602577 | orchestrator | # DEPLOY MANAGER 2025-09-18 10:06:14.602606 | orchestrator | 2025-09-18 10:06:14.602621 | orchestrator | + set -e 2025-09-18 10:06:14.602635 | orchestrator | + echo 2025-09-18 10:06:14.602648 | orchestrator | + echo '# DEPLOY MANAGER' 2025-09-18 10:06:14.602666 | orchestrator | + echo 2025-09-18 10:06:14.602709 | orchestrator | + cat /opt/manager-vars.sh 2025-09-18 10:06:14.605408 | orchestrator | export NUMBER_OF_NODES=6 2025-09-18 10:06:14.605457 | orchestrator | 2025-09-18 10:06:14.605470 | orchestrator | export CEPH_VERSION=reef 2025-09-18 10:06:14.605484 | orchestrator | export CONFIGURATION_VERSION=main 2025-09-18 10:06:14.605497 | orchestrator 
| export MANAGER_VERSION=latest 2025-09-18 10:06:14.605519 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-09-18 10:06:14.605530 | orchestrator | 2025-09-18 10:06:14.605548 | orchestrator | export ARA=false 2025-09-18 10:06:14.605560 | orchestrator | export DEPLOY_MODE=manager 2025-09-18 10:06:14.605607 | orchestrator | export TEMPEST=false 2025-09-18 10:06:14.605619 | orchestrator | export IS_ZUUL=true 2025-09-18 10:06:14.605631 | orchestrator | 2025-09-18 10:06:14.605649 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190 2025-09-18 10:06:14.605661 | orchestrator | export EXTERNAL_API=false 2025-09-18 10:06:14.605672 | orchestrator | 2025-09-18 10:06:14.605683 | orchestrator | export IMAGE_USER=ubuntu 2025-09-18 10:06:14.605698 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-09-18 10:06:14.605708 | orchestrator | 2025-09-18 10:06:14.605719 | orchestrator | export CEPH_STACK=ceph-ansible 2025-09-18 10:06:14.605736 | orchestrator | 2025-09-18 10:06:14.605748 | orchestrator | + echo 2025-09-18 10:06:14.605760 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-09-18 10:06:14.606589 | orchestrator | ++ export INTERACTIVE=false 2025-09-18 10:06:14.606610 | orchestrator | ++ INTERACTIVE=false 2025-09-18 10:06:14.606623 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-09-18 10:06:14.606635 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-09-18 10:06:14.606797 | orchestrator | + source /opt/manager-vars.sh 2025-09-18 10:06:14.606814 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-09-18 10:06:14.606826 | orchestrator | ++ NUMBER_OF_NODES=6 2025-09-18 10:06:14.606864 | orchestrator | ++ export CEPH_VERSION=reef 2025-09-18 10:06:14.606877 | orchestrator | ++ CEPH_VERSION=reef 2025-09-18 10:06:14.606888 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-09-18 10:06:14.606899 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-09-18 10:06:14.606939 | orchestrator | ++ export MANAGER_VERSION=latest 2025-09-18 10:06:14.606963 | 
orchestrator | ++ MANAGER_VERSION=latest
2025-09-18 10:06:14.606974 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-18 10:06:14.606994 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-18 10:06:14.607026 | orchestrator | ++ export ARA=false
2025-09-18 10:06:14.607039 | orchestrator | ++ ARA=false
2025-09-18 10:06:14.607054 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-18 10:06:14.607086 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-18 10:06:14.607148 | orchestrator | ++ export TEMPEST=false
2025-09-18 10:06:14.607160 | orchestrator | ++ TEMPEST=false
2025-09-18 10:06:14.607171 | orchestrator | ++ export IS_ZUUL=true
2025-09-18 10:06:14.607181 | orchestrator | ++ IS_ZUUL=true
2025-09-18 10:06:14.607192 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:06:14.607203 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:06:14.607213 | orchestrator | ++ export EXTERNAL_API=false
2025-09-18 10:06:14.607224 | orchestrator | ++ EXTERNAL_API=false
2025-09-18 10:06:14.607234 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-18 10:06:14.607245 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-18 10:06:14.607256 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-18 10:06:14.607267 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-18 10:06:14.607283 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-18 10:06:14.607294 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-18 10:06:14.607305 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver
2025-09-18 10:06:14.667132 | orchestrator | + docker version
2025-09-18 10:06:14.917733 | orchestrator | Client: Docker Engine - Community
2025-09-18 10:06:14.917827 | orchestrator | Version: 27.5.1
2025-09-18 10:06:14.917844 | orchestrator | API version: 1.47
2025-09-18 10:06:14.917856 | orchestrator | Go version: go1.22.11
2025-09-18 10:06:14.917866 | orchestrator | Git commit: 9f9e405
2025-09-18 10:06:14.917877 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-18 10:06:14.917890 | orchestrator | OS/Arch: linux/amd64
2025-09-18 10:06:14.917900 | orchestrator | Context: default
2025-09-18 10:06:14.917911 | orchestrator |
2025-09-18 10:06:14.917922 | orchestrator | Server: Docker Engine - Community
2025-09-18 10:06:14.917933 | orchestrator | Engine:
2025-09-18 10:06:14.917945 | orchestrator | Version: 27.5.1
2025-09-18 10:06:14.917956 | orchestrator | API version: 1.47 (minimum version 1.24)
2025-09-18 10:06:14.917998 | orchestrator | Go version: go1.22.11
2025-09-18 10:06:14.918010 | orchestrator | Git commit: 4c9b3b0
2025-09-18 10:06:14.918073 | orchestrator | Built: Wed Jan 22 13:41:48 2025
2025-09-18 10:06:14.918084 | orchestrator | OS/Arch: linux/amd64
2025-09-18 10:06:14.918095 | orchestrator | Experimental: false
2025-09-18 10:06:14.918107 | orchestrator | containerd:
2025-09-18 10:06:14.918118 | orchestrator | Version: 1.7.27
2025-09-18 10:06:14.918129 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da
2025-09-18 10:06:14.918140 | orchestrator | runc:
2025-09-18 10:06:14.918150 | orchestrator | Version: 1.2.5
2025-09-18 10:06:14.918161 | orchestrator | GitCommit: v1.2.5-0-g59923ef
2025-09-18 10:06:14.918172 | orchestrator | docker-init:
2025-09-18 10:06:14.918183 | orchestrator | Version: 0.19.0
2025-09-18 10:06:14.918194 | orchestrator | GitCommit: de40ad0
2025-09-18 10:06:14.921222 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh
2025-09-18 10:06:14.930331 | orchestrator | + set -e
2025-09-18 10:06:14.930418 | orchestrator | + source /opt/manager-vars.sh
2025-09-18 10:06:14.930456 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-18 10:06:14.930467 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-18 10:06:14.930478 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-18 10:06:14.930524 | orchestrator | ++ CEPH_VERSION=reef
2025-09-18 10:06:14.930536 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-18 10:06:14.930547 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-18 10:06:14.930558 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-18 10:06:14.930569 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-18 10:06:14.930580 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-18 10:06:14.930590 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-18 10:06:14.930601 | orchestrator | ++ export ARA=false
2025-09-18 10:06:14.930612 | orchestrator | ++ ARA=false
2025-09-18 10:06:14.930623 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-18 10:06:14.930634 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-18 10:06:14.930644 | orchestrator | ++ export TEMPEST=false
2025-09-18 10:06:14.930655 | orchestrator | ++ TEMPEST=false
2025-09-18 10:06:14.930705 | orchestrator | ++ export IS_ZUUL=true
2025-09-18 10:06:14.930719 | orchestrator | ++ IS_ZUUL=true
2025-09-18 10:06:14.930729 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:06:14.930740 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:06:14.930783 | orchestrator | ++ export EXTERNAL_API=false
2025-09-18 10:06:14.930795 | orchestrator | ++ EXTERNAL_API=false
2025-09-18 10:06:14.930806 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-18 10:06:14.930816 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-18 10:06:14.930827 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-18 10:06:14.930838 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-18 10:06:14.930849 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-18 10:06:14.930859 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-18 10:06:14.930870 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-18 10:06:14.930881 | orchestrator | ++ export INTERACTIVE=false
2025-09-18 10:06:14.930891 | orchestrator | ++ INTERACTIVE=false
2025-09-18 10:06:14.930902 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-18 10:06:14.930917 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-18 10:06:14.930932 | orchestrator | + [[ latest != \l\a\t\e\s\t ]]
2025-09-18 10:06:14.930943 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-18 10:06:14.930954 | orchestrator | + /opt/configuration/scripts/set-ceph-version.sh reef
2025-09-18 10:06:14.936886 | orchestrator | + set -e
2025-09-18 10:06:14.936908 | orchestrator | + VERSION=reef
2025-09-18 10:06:14.937997 | orchestrator | ++ grep '^ceph_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-18 10:06:14.943835 | orchestrator | + [[ -n ceph_version: reef ]]
2025-09-18 10:06:14.943870 | orchestrator | + sed -i 's/ceph_version: .*/ceph_version: reef/g' /opt/configuration/environments/manager/configuration.yml
2025-09-18 10:06:14.950136 | orchestrator | + /opt/configuration/scripts/set-openstack-version.sh 2024.2
2025-09-18 10:06:14.955717 | orchestrator | + set -e
2025-09-18 10:06:14.955750 | orchestrator | + VERSION=2024.2
2025-09-18 10:06:14.956756 | orchestrator | ++ grep '^openstack_version:' /opt/configuration/environments/manager/configuration.yml
2025-09-18 10:06:14.960485 | orchestrator | + [[ -n openstack_version: 2024.2 ]]
2025-09-18 10:06:14.960518 | orchestrator | + sed -i 's/openstack_version: .*/openstack_version: 2024.2/g' /opt/configuration/environments/manager/configuration.yml
2025-09-18 10:06:14.965748 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]]
2025-09-18 10:06:14.966797 | orchestrator | ++ semver latest 7.0.0
2025-09-18 10:06:15.029293 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-18 10:06:15.029357 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-18 10:06:15.029370 | orchestrator | + echo 'enable_osism_kubernetes: true'
2025-09-18 10:06:15.029382 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh
2025-09-18 10:06:15.124124 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-18 10:06:15.125730 | orchestrator | + source /opt/venv/bin/activate
2025-09-18 10:06:15.126953 | orchestrator | ++ deactivate nondestructive
2025-09-18 10:06:15.126966 | orchestrator | ++ '[' -n '' ']'
2025-09-18 10:06:15.126973 | orchestrator | ++ '[' -n '' ']'
2025-09-18 10:06:15.126979 | orchestrator | ++ hash -r
2025-09-18 10:06:15.126988 | orchestrator | ++ '[' -n '' ']'
2025-09-18 10:06:15.126994 | orchestrator | ++ unset VIRTUAL_ENV
2025-09-18 10:06:15.127119 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT
2025-09-18 10:06:15.127136 | orchestrator | ++ '[' '!' nondestructive = nondestructive ']'
2025-09-18 10:06:15.127145 | orchestrator | ++ '[' linux-gnu = cygwin ']'
2025-09-18 10:06:15.127152 | orchestrator | ++ '[' linux-gnu = msys ']'
2025-09-18 10:06:15.127204 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv
2025-09-18 10:06:15.127211 | orchestrator | ++ VIRTUAL_ENV=/opt/venv
2025-09-18 10:06:15.127217 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-18 10:06:15.127334 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-18 10:06:15.127343 | orchestrator | ++ export PATH
2025-09-18 10:06:15.127492 | orchestrator | ++ '[' -n '' ']'
2025-09-18 10:06:15.127500 | orchestrator | ++ '[' -z '' ']'
2025-09-18 10:06:15.127506 | orchestrator | ++ _OLD_VIRTUAL_PS1=
2025-09-18 10:06:15.127511 | orchestrator | ++ PS1='(venv) '
2025-09-18 10:06:15.127517 | orchestrator | ++ export PS1
2025-09-18 10:06:15.127522 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) '
2025-09-18 10:06:15.127528 | orchestrator | ++ export VIRTUAL_ENV_PROMPT
2025-09-18 10:06:15.127536 | orchestrator | ++ hash -r
2025-09-18 10:06:15.127652 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml
2025-09-18 10:06:16.372362 | orchestrator |
2025-09-18 10:06:16.372513 | orchestrator |
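The `set-ceph-version.sh` and `set-openstack-version.sh` steps traced above follow the same grep-then-sed pattern: rewrite a `key: value` line in `configuration.yml` only if the key is already present. A minimal sketch of that pattern follows; the `set_version` helper and the scratch file are illustrative only, not the actual scripts.

```shell
#!/usr/bin/env bash
set -e

# Generic form of the grep-then-sed version pinning seen in the xtrace:
# replace "<key>: <old>" with "<key>: <new>" only if the key exists.
set_version() {
    local key=$1 version=$2 file=$3
    if [[ -n "$(grep "^${key}:" "$file")" ]]; then
        sed -i "s/${key}: .*/${key}: ${version}/g" "$file"
    fi
}

# Exercise the helper against a scratch copy (hypothetical values).
tmpfile=$(mktemp)
printf 'ceph_version: quincy\nopenstack_version: 2024.1\n' > "$tmpfile"
set_version ceph_version reef "$tmpfile"
set_version openstack_version 2024.2 "$tmpfile"
cat "$tmpfile"
# ceph_version: reef
# openstack_version: 2024.2
rm -f "$tmpfile"
```

Because the substitution only runs when `grep` finds the key, a configuration file without the line is left untouched rather than gaining a new entry.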
PLAY [Copy custom facts] *******************************************************
2025-09-18 10:06:16.372532 | orchestrator |
2025-09-18 10:06:16.372545 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-18 10:06:16.932765 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:16.932865 | orchestrator |
2025-09-18 10:06:16.932880 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-18 10:06:17.861904 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:17.861988 | orchestrator |
2025-09-18 10:06:17.862003 | orchestrator | PLAY [Before the deployment of the manager] ************************************
2025-09-18 10:06:17.862055 | orchestrator |
2025-09-18 10:06:17.862068 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-18 10:06:20.112179 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:20.112299 | orchestrator |
2025-09-18 10:06:20.112316 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************
2025-09-18 10:06:20.157691 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:20.157776 | orchestrator |
2025-09-18 10:06:20.157789 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] ****************************
2025-09-18 10:06:20.619276 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:20.619376 | orchestrator |
2025-09-18 10:06:20.619392 | orchestrator | TASK [Add netbox_enable parameter] *********************************************
2025-09-18 10:06:20.662679 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:20.662746 | orchestrator |
2025-09-18 10:06:20.662762 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************
2025-09-18 10:06:21.009670 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:21.009771 | orchestrator |
2025-09-18 10:06:21.009794 | orchestrator | TASK [Use insecure glance configuration] ***************************************
2025-09-18 10:06:21.063927 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:21.063999 | orchestrator |
2025-09-18 10:06:21.064015 | orchestrator | TASK [Check if /etc/OTC_region exist] ******************************************
2025-09-18 10:06:21.397399 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:21.397560 | orchestrator |
2025-09-18 10:06:21.397580 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************
2025-09-18 10:06:21.524123 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:21.524217 | orchestrator |
2025-09-18 10:06:21.524231 | orchestrator | PLAY [Apply role traefik] ******************************************************
2025-09-18 10:06:21.524244 | orchestrator |
2025-09-18 10:06:21.524257 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-18 10:06:23.217674 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:23.217779 | orchestrator |
2025-09-18 10:06:23.217797 | orchestrator | TASK [Apply traefik role] ******************************************************
2025-09-18 10:06:23.314709 | orchestrator | included: osism.services.traefik for testbed-manager
2025-09-18 10:06:23.314783 | orchestrator |
2025-09-18 10:06:23.314792 | orchestrator | TASK [osism.services.traefik : Include config tasks] ***************************
2025-09-18 10:06:23.367866 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager
2025-09-18 10:06:23.367933 | orchestrator |
2025-09-18 10:06:23.367944 | orchestrator | TASK [osism.services.traefik : Create required directories] ********************
2025-09-18 10:06:24.437320 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik)
2025-09-18 10:06:24.437418 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates)
2025-09-18 10:06:24.437464 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration)
2025-09-18 10:06:24.437477 | orchestrator |
2025-09-18 10:06:24.437489 | orchestrator | TASK [osism.services.traefik : Copy configuration files] ***********************
2025-09-18 10:06:26.215821 | orchestrator | changed: [testbed-manager] => (item=traefik.yml)
2025-09-18 10:06:26.215928 | orchestrator | changed: [testbed-manager] => (item=traefik.env)
2025-09-18 10:06:26.215945 | orchestrator | changed: [testbed-manager] => (item=certificates.yml)
2025-09-18 10:06:26.215958 | orchestrator |
2025-09-18 10:06:26.215970 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ********************
2025-09-18 10:06:26.862268 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-18 10:06:26.862357 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:26.862372 | orchestrator |
2025-09-18 10:06:26.862384 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] *********************
2025-09-18 10:06:27.509641 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-18 10:06:27.509729 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:27.509743 | orchestrator |
2025-09-18 10:06:27.509754 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] *********************
2025-09-18 10:06:27.556242 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:27.556329 | orchestrator |
2025-09-18 10:06:27.556347 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] *******************
2025-09-18 10:06:27.921788 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:27.921888 | orchestrator |
2025-09-18 10:06:27.921906 | orchestrator | TASK [osism.services.traefik : Include service tasks] **************************
2025-09-18 10:06:28.003527 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager
2025-09-18 10:06:28.003611 | orchestrator |
2025-09-18 10:06:28.003624 | orchestrator | TASK [osism.services.traefik : Create traefik external network] ****************
2025-09-18 10:06:29.004765 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:29.004870 | orchestrator |
2025-09-18 10:06:29.004887 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] *******************
2025-09-18 10:06:29.807369 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:29.807507 | orchestrator |
2025-09-18 10:06:29.807526 | orchestrator | TASK [osism.services.traefik : Manage traefik service] *************************
2025-09-18 10:06:43.636086 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:43.636195 | orchestrator |
2025-09-18 10:06:43.636212 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] *************
2025-09-18 10:06:43.676972 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:43.677046 | orchestrator |
2025-09-18 10:06:43.677060 | orchestrator | PLAY [Deploy manager service] **************************************************
2025-09-18 10:06:43.677073 | orchestrator |
2025-09-18 10:06:43.677084 | orchestrator | TASK [Gathering Facts] *********************************************************
2025-09-18 10:06:45.397507 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:45.397615 | orchestrator |
2025-09-18 10:06:45.397665 | orchestrator | TASK [Apply manager role] ******************************************************
2025-09-18 10:06:45.501762 | orchestrator | included: osism.services.manager for testbed-manager
2025-09-18 10:06:45.501860 | orchestrator |
2025-09-18 10:06:45.501875 | orchestrator | TASK [osism.services.manager : Include install tasks] **************************
2025-09-18 10:06:45.570850 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager
2025-09-18 10:06:45.570942 | orchestrator |
2025-09-18 10:06:45.570956 | orchestrator | TASK [osism.services.manager : Install required packages] **********************
2025-09-18 10:06:47.982823 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:47.982919 | orchestrator |
2025-09-18 10:06:47.982934 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] *****
2025-09-18 10:06:48.038178 | orchestrator | ok: [testbed-manager]
2025-09-18 10:06:48.038249 | orchestrator |
2025-09-18 10:06:48.038265 | orchestrator | TASK [osism.services.manager : Include config tasks] ***************************
2025-09-18 10:06:48.167501 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager
2025-09-18 10:06:48.167571 | orchestrator |
2025-09-18 10:06:48.167583 | orchestrator | TASK [osism.services.manager : Create required directories] ********************
2025-09-18 10:06:51.019259 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible)
2025-09-18 10:06:51.019361 | orchestrator | changed: [testbed-manager] => (item=/opt/archive)
2025-09-18 10:06:51.019376 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration)
2025-09-18 10:06:51.019389 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data)
2025-09-18 10:06:51.019400 | orchestrator | ok: [testbed-manager] => (item=/opt/manager)
2025-09-18 10:06:51.019412 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets)
2025-09-18 10:06:51.019423 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets)
2025-09-18 10:06:51.019489 | orchestrator | changed: [testbed-manager] => (item=/opt/state)
2025-09-18 10:06:51.019502 | orchestrator |
2025-09-18 10:06:51.019514 | orchestrator | TASK [osism.services.manager : Copy all environment file] **********************
2025-09-18 10:06:51.653088 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:51.653177 | orchestrator |
2025-09-18 10:06:51.653192 | orchestrator | TASK [osism.services.manager : Copy client environment file] *******************
2025-09-18 10:06:52.278779 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:52.278897 | orchestrator |
2025-09-18 10:06:52.278925 | orchestrator | TASK [osism.services.manager : Include ara config tasks] ***********************
2025-09-18 10:06:52.360668 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager
2025-09-18 10:06:52.360745 | orchestrator |
2025-09-18 10:06:52.360764 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] *********************
2025-09-18 10:06:53.585766 | orchestrator | changed: [testbed-manager] => (item=ara)
2025-09-18 10:06:53.585891 | orchestrator | changed: [testbed-manager] => (item=ara-server)
2025-09-18 10:06:53.585916 | orchestrator |
2025-09-18 10:06:53.585939 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ******************
2025-09-18 10:06:54.211290 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:54.211391 | orchestrator |
2025-09-18 10:06:54.211407 | orchestrator | TASK [osism.services.manager : Include vault config tasks] *********************
2025-09-18 10:06:54.263743 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:54.263778 | orchestrator |
2025-09-18 10:06:54.263791 | orchestrator | TASK [osism.services.manager : Include frontend config tasks] ******************
2025-09-18 10:06:54.342598 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-frontend.yml for testbed-manager
2025-09-18 10:06:54.342669 | orchestrator |
2025-09-18 10:06:54.342683 | orchestrator | TASK [osism.services.manager : Copy frontend environment file] *****************
2025-09-18 10:06:54.964201 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:54.964308 | orchestrator |
2025-09-18 10:06:54.964324 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] *******************
2025-09-18 10:06:55.037507 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager
2025-09-18 10:06:55.037619 | orchestrator |
2025-09-18 10:06:55.037634 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] **************************
2025-09-18 10:06:56.326727 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-18 10:06:56.326827 | orchestrator | changed: [testbed-manager] => (item=None)
2025-09-18 10:06:56.326842 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:56.326854 | orchestrator |
2025-09-18 10:06:56.326866 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ******************
2025-09-18 10:06:56.929222 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:56.929309 | orchestrator |
2025-09-18 10:06:56.929325 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ********************
2025-09-18 10:06:56.980983 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:06:56.981023 | orchestrator |
2025-09-18 10:06:56.981035 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ********************
2025-09-18 10:06:57.080370 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager
2025-09-18 10:06:57.080500 | orchestrator |
2025-09-18 10:06:57.080513 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] ****************
2025-09-18 10:06:57.567360 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:57.567517 | orchestrator |
2025-09-18 10:06:57.567535 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] **************
2025-09-18 10:06:57.972811 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:57.972897 | orchestrator |
2025-09-18 10:06:57.972911 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ******************
2025-09-18 10:06:59.172912 | orchestrator | changed: [testbed-manager] => (item=conductor)
2025-09-18 10:06:59.173001 | orchestrator | changed: [testbed-manager] => (item=openstack)
2025-09-18 10:06:59.173014 | orchestrator |
2025-09-18 10:06:59.173027 | orchestrator | TASK [osism.services.manager : Copy listener environment file] *****************
2025-09-18 10:06:59.817186 | orchestrator | changed: [testbed-manager]
2025-09-18 10:06:59.817285 | orchestrator |
2025-09-18 10:06:59.817303 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************
2025-09-18 10:07:00.220868 | orchestrator | ok: [testbed-manager]
2025-09-18 10:07:00.220963 | orchestrator |
2025-09-18 10:07:00.220976 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] **************
2025-09-18 10:07:00.573628 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:00.573726 | orchestrator |
2025-09-18 10:07:00.573742 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ********
2025-09-18 10:07:00.623258 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:07:00.623339 | orchestrator |
2025-09-18 10:07:00.623354 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] *******************
2025-09-18 10:07:00.699376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager
2025-09-18 10:07:00.699515 | orchestrator |
2025-09-18 10:07:00.699533 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] **********************
2025-09-18 10:07:00.742616 | orchestrator | ok: [testbed-manager]
2025-09-18 10:07:00.742678 | orchestrator |
2025-09-18 10:07:00.742692 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] ***************************
2025-09-18 10:07:02.756682 | orchestrator | changed: [testbed-manager] => (item=osism)
2025-09-18 10:07:02.756790 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker)
2025-09-18 10:07:02.756806 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager)
2025-09-18 10:07:02.756818 | orchestrator |
2025-09-18 10:07:02.756831 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] *********************
2025-09-18 10:07:03.460880 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:03.460980 | orchestrator |
2025-09-18 10:07:03.461001 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] *********************
2025-09-18 10:07:04.188715 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:04.188816 | orchestrator |
2025-09-18 10:07:04.188833 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] ***********************
2025-09-18 10:07:04.900331 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:04.900476 | orchestrator |
2025-09-18 10:07:04.900496 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] *******************
2025-09-18 10:07:05.003655 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager
2025-09-18 10:07:05.003732 | orchestrator |
2025-09-18 10:07:05.003748 | orchestrator | TASK [osism.services.manager : Include scripts vars file] **********************
2025-09-18 10:07:05.048847 | orchestrator | ok: [testbed-manager]
2025-09-18 10:07:05.048904 | orchestrator |
2025-09-18 10:07:05.048918 | orchestrator | TASK [osism.services.manager : Copy scripts] ***********************************
2025-09-18 10:07:05.756982 | orchestrator | changed: [testbed-manager] => (item=osism-include)
2025-09-18 10:07:05.757074 | orchestrator |
2025-09-18 10:07:05.757086 | orchestrator | TASK [osism.services.manager : Include service tasks] **************************
2025-09-18 10:07:05.847219 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager
2025-09-18 10:07:05.847309 | orchestrator |
2025-09-18 10:07:05.847323 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit file] *****************
2025-09-18 10:07:06.542342 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:06.542493 | orchestrator |
2025-09-18 10:07:06.542515 | orchestrator | TASK [osism.services.manager : Create traefik external network] ****************
2025-09-18 10:07:07.125629 | orchestrator | ok: [testbed-manager]
2025-09-18 10:07:07.125709 | orchestrator |
2025-09-18 10:07:07.125718 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] ***
2025-09-18 10:07:07.173717 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:07:07.173770 | orchestrator |
2025-09-18 10:07:07.173779 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] ***
2025-09-18 10:07:07.241571 | orchestrator | ok: [testbed-manager]
2025-09-18 10:07:07.241640 | orchestrator |
2025-09-18 10:07:07.241649 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] *******************
2025-09-18 10:07:08.054718 | orchestrator | changed: [testbed-manager]
2025-09-18 10:07:08.054823 | orchestrator |
2025-09-18 10:07:08.054839 | orchestrator | TASK [osism.services.manager : Pull container images] **************************
2025-09-18 10:08:13.768183 | orchestrator | changed: [testbed-manager]
2025-09-18 10:08:13.768305 | orchestrator |
2025-09-18 10:08:13.768325 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] ***
2025-09-18 10:08:14.733656 | orchestrator | ok: [testbed-manager]
2025-09-18 10:08:14.733758 | orchestrator |
2025-09-18 10:08:14.733775 | orchestrator | TASK [osism.services.manager : Do a manual start of the manager service] *******
2025-09-18 10:08:14.790746 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:08:14.790832 | orchestrator |
2025-09-18 10:08:14.790848 | orchestrator | TASK [osism.services.manager : Manage manager service] *************************
2025-09-18 10:08:20.774080 | orchestrator | changed: [testbed-manager]
2025-09-18 10:08:20.774188 | orchestrator |
2025-09-18 10:08:20.774198 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ******
2025-09-18 10:08:20.828157 | orchestrator | ok: [testbed-manager]
2025-09-18 10:08:20.828231 | orchestrator |
2025-09-18 10:08:20.828243 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-18 10:08:20.828253 | orchestrator |
2025-09-18 10:08:20.828259 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] *************
2025-09-18 10:08:20.876702 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:08:20.876746 | orchestrator |
2025-09-18 10:08:20.876752 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] ***
2025-09-18 10:09:20.930814 | orchestrator | Pausing for 60 seconds
2025-09-18 10:09:20.930933 | orchestrator | changed: [testbed-manager]
2025-09-18 10:09:20.930951 | orchestrator |
2025-09-18 10:09:20.930965 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] ***
2025-09-18 10:09:26.524543 | orchestrator | changed: [testbed-manager]
2025-09-18 10:09:26.524640 | orchestrator |
2025-09-18 10:09:26.524658 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] ***
2025-09-18 10:10:08.143392 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left).
2025-09-18 10:10:08.143459 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left).
2025-09-18 10:10:08.143467 | orchestrator | changed: [testbed-manager]
2025-09-18 10:10:08.143489 | orchestrator |
2025-09-18 10:10:08.143496 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] ***
2025-09-18 10:10:17.947573 | orchestrator | changed: [testbed-manager]
2025-09-18 10:10:17.947669 | orchestrator |
2025-09-18 10:10:17.947688 | orchestrator | TASK [osism.services.manager : Include initialize tasks] ***********************
2025-09-18 10:10:18.020664 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager
2025-09-18 10:10:18.020748 | orchestrator |
2025-09-18 10:10:18.020762 | orchestrator | TASK [osism.services.manager : Flush handlers] *********************************
2025-09-18 10:10:18.020775 | orchestrator |
2025-09-18 10:10:18.020786 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] *****************
2025-09-18 10:10:18.070365 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:10:18.070447 | orchestrator |
2025-09-18 10:10:18.070460 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:10:18.070473 | orchestrator | testbed-manager : ok=66 changed=36 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0
2025-09-18 10:10:18.070485 | orchestrator |
2025-09-18 10:10:18.160806 | orchestrator | + [[ -e /opt/venv/bin/activate ]]
2025-09-18 10:10:18.160885 | orchestrator | + deactivate
2025-09-18 10:10:18.160899 | orchestrator | + '[' -n
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']'
2025-09-18 10:10:18.160913 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
2025-09-18 10:10:18.160923 | orchestrator | + export PATH
2025-09-18 10:10:18.160935 | orchestrator | + unset _OLD_VIRTUAL_PATH
2025-09-18 10:10:18.160946 | orchestrator | + '[' -n '' ']'
2025-09-18 10:10:18.160957 | orchestrator | + hash -r
2025-09-18 10:10:18.160988 | orchestrator | + '[' -n '' ']'
2025-09-18 10:10:18.160999 | orchestrator | + unset VIRTUAL_ENV
2025-09-18 10:10:18.161010 | orchestrator | + unset VIRTUAL_ENV_PROMPT
2025-09-18 10:10:18.161021 | orchestrator | + '[' '!' '' = nondestructive ']'
2025-09-18 10:10:18.161032 | orchestrator | + unset -f deactivate
2025-09-18 10:10:18.161043 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub
2025-09-18 10:10:18.166850 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-09-18 10:10:18.166880 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-09-18 10:10:18.166891 | orchestrator | + local max_attempts=60
2025-09-18 10:10:18.166903 | orchestrator | + local name=ceph-ansible
2025-09-18 10:10:18.166913 | orchestrator | + local attempt_num=1
2025-09-18 10:10:18.167747 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:10:18.202187 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:10:18.202267 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-18 10:10:18.202279 | orchestrator | + local max_attempts=60
2025-09-18 10:10:18.202290 | orchestrator | + local name=kolla-ansible
2025-09-18 10:10:18.202301 | orchestrator | + local attempt_num=1
2025-09-18 10:10:18.202734 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-18 10:10:18.233323 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:10:18.233449 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-18 10:10:18.233462 | orchestrator | + local max_attempts=60
2025-09-18 10:10:18.233475 | orchestrator | + local name=osism-ansible
2025-09-18 10:10:18.233486 | orchestrator | + local attempt_num=1
2025-09-18 10:10:18.234245 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-18 10:10:18.273801 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:10:18.273889 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-18 10:10:18.273903 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-18 10:10:18.952475 | orchestrator | + docker compose --project-directory /opt/manager ps
2025-09-18 10:10:19.150672 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS
2025-09-18 10:10:19.150770 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:reef "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150786 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:2024.2 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150821 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp
2025-09-18 10:10:19.150835 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.3 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp
2025-09-18 10:10:19.150854 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" beat About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150864 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" flower About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150874 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:latest "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 52 seconds (healthy)
2025-09-18 10:10:19.150884 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" listener About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150893 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.8.3 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp
2025-09-18 10:10:19.150903 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:latest "/sbin/tini -- osism…" openstack About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150913 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.5-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp
2025-09-18 10:10:19.150922 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:latest "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150932 | orchestrator | osism-frontend registry.osism.tech/osism/osism-frontend:latest "docker-entrypoint.s…" frontend About a minute ago Up About a minute 192.168.16.5:3000->3000/tcp
2025-09-18 10:10:19.150941 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:latest "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.150951 | orchestrator | osismclient registry.osism.tech/osism/osism:latest "/sbin/tini -- sleep…" osismclient About a minute ago Up About a minute (healthy)
2025-09-18 10:10:19.156666 | orchestrator | ++ semver latest 7.0.0
2025-09-18 10:10:19.203638 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-18 10:10:19.203744 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-18 10:10:19.203765 | orchestrator | + sed -i
s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-09-18 10:10:19.208004 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-09-18 10:10:31.363043 | orchestrator | 2025-09-18 10:10:31 | INFO  | Task 25e5fa2f-2c78-4de2-b202-2733bba89aea (resolvconf) was prepared for execution. 2025-09-18 10:10:31.363149 | orchestrator | 2025-09-18 10:10:31 | INFO  | It takes a moment until task 25e5fa2f-2c78-4de2-b202-2733bba89aea (resolvconf) has been started and output is visible here. 2025-09-18 10:10:45.611083 | orchestrator | 2025-09-18 10:10:45.611218 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-09-18 10:10:45.611234 | orchestrator | 2025-09-18 10:10:45.611246 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 10:10:45.611287 | orchestrator | Thursday 18 September 2025 10:10:35 +0000 (0:00:00.147) 0:00:00.147 **** 2025-09-18 10:10:45.611299 | orchestrator | ok: [testbed-manager] 2025-09-18 10:10:45.611312 | orchestrator | 2025-09-18 10:10:45.611323 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-18 10:10:45.611363 | orchestrator | Thursday 18 September 2025 10:10:38 +0000 (0:00:03.736) 0:00:03.884 **** 2025-09-18 10:10:45.611374 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:10:45.611386 | orchestrator | 2025-09-18 10:10:45.611396 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-18 10:10:45.611407 | orchestrator | Thursday 18 September 2025 10:10:38 +0000 (0:00:00.066) 0:00:03.950 **** 2025-09-18 10:10:45.611418 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-09-18 10:10:45.611430 | orchestrator | 2025-09-18 10:10:45.611442 | orchestrator | TASK 
[osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-18 10:10:45.611452 | orchestrator | Thursday 18 September 2025 10:10:39 +0000 (0:00:00.074) 0:00:04.025 **** 2025-09-18 10:10:45.611463 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 10:10:45.611474 | orchestrator | 2025-09-18 10:10:45.611485 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-18 10:10:45.611495 | orchestrator | Thursday 18 September 2025 10:10:39 +0000 (0:00:00.064) 0:00:04.090 **** 2025-09-18 10:10:45.611506 | orchestrator | ok: [testbed-manager] 2025-09-18 10:10:45.611516 | orchestrator | 2025-09-18 10:10:45.611527 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-18 10:10:45.611538 | orchestrator | Thursday 18 September 2025 10:10:40 +0000 (0:00:01.024) 0:00:05.114 **** 2025-09-18 10:10:45.611548 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:10:45.611559 | orchestrator | 2025-09-18 10:10:45.611569 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-18 10:10:45.611580 | orchestrator | Thursday 18 September 2025 10:10:40 +0000 (0:00:00.061) 0:00:05.176 **** 2025-09-18 10:10:45.611591 | orchestrator | ok: [testbed-manager] 2025-09-18 10:10:45.611601 | orchestrator | 2025-09-18 10:10:45.611612 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-18 10:10:45.611622 | orchestrator | Thursday 18 September 2025 10:10:41 +0000 (0:00:01.491) 0:00:06.668 **** 2025-09-18 10:10:45.611633 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:10:45.611643 | orchestrator | 2025-09-18 10:10:45.611654 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 
2025-09-18 10:10:45.611666 | orchestrator | Thursday 18 September 2025 10:10:41 +0000 (0:00:00.076) 0:00:06.745 **** 2025-09-18 10:10:45.611677 | orchestrator | changed: [testbed-manager] 2025-09-18 10:10:45.611688 | orchestrator | 2025-09-18 10:10:45.611698 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-18 10:10:45.611709 | orchestrator | Thursday 18 September 2025 10:10:42 +0000 (0:00:00.515) 0:00:07.260 **** 2025-09-18 10:10:45.611719 | orchestrator | changed: [testbed-manager] 2025-09-18 10:10:45.611730 | orchestrator | 2025-09-18 10:10:45.611740 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-18 10:10:45.611751 | orchestrator | Thursday 18 September 2025 10:10:43 +0000 (0:00:01.035) 0:00:08.295 **** 2025-09-18 10:10:45.611761 | orchestrator | ok: [testbed-manager] 2025-09-18 10:10:45.611772 | orchestrator | 2025-09-18 10:10:45.611782 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-18 10:10:45.611793 | orchestrator | Thursday 18 September 2025 10:10:44 +0000 (0:00:00.903) 0:00:09.198 **** 2025-09-18 10:10:45.611813 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-09-18 10:10:45.611831 | orchestrator | 2025-09-18 10:10:45.611842 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-18 10:10:45.611853 | orchestrator | Thursday 18 September 2025 10:10:44 +0000 (0:00:00.073) 0:00:09.272 **** 2025-09-18 10:10:45.611863 | orchestrator | changed: [testbed-manager] 2025-09-18 10:10:45.611874 | orchestrator | 2025-09-18 10:10:45.611884 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:10:45.611897 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 
failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:10:45.611907 | orchestrator | 2025-09-18 10:10:45.611918 | orchestrator | 2025-09-18 10:10:45.611929 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:10:45.611939 | orchestrator | Thursday 18 September 2025 10:10:45 +0000 (0:00:01.108) 0:00:10.380 **** 2025-09-18 10:10:45.611950 | orchestrator | =============================================================================== 2025-09-18 10:10:45.611961 | orchestrator | Gathering Facts --------------------------------------------------------- 3.74s 2025-09-18 10:10:45.611971 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 1.49s 2025-09-18 10:10:45.611981 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.11s 2025-09-18 10:10:45.611992 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-09-18 10:10:45.612003 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.02s 2025-09-18 10:10:45.612013 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.90s 2025-09-18 10:10:45.612043 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-09-18 10:10:45.612054 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-09-18 10:10:45.612065 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.07s 2025-09-18 10:10:45.612075 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.07s 2025-09-18 10:10:45.612086 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-09-18 10:10:45.612097 | orchestrator | osism.commons.resolvconf : Include distribution specific 
installation tasks --- 0.06s 2025-09-18 10:10:45.612107 | orchestrator | osism.commons.resolvconf : Install package systemd-resolved ------------- 0.06s 2025-09-18 10:10:45.860135 | orchestrator | + osism apply sshconfig 2025-09-18 10:10:57.964232 | orchestrator | 2025-09-18 10:10:57 | INFO  | Task 78f4904e-b659-4ae2-9c67-12c4ffe901bc (sshconfig) was prepared for execution. 2025-09-18 10:10:57.964395 | orchestrator | 2025-09-18 10:10:57 | INFO  | It takes a moment until task 78f4904e-b659-4ae2-9c67-12c4ffe901bc (sshconfig) has been started and output is visible here. 2025-09-18 10:11:09.393060 | orchestrator | 2025-09-18 10:11:09.393171 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-09-18 10:11:09.393187 | orchestrator | 2025-09-18 10:11:09.393197 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-09-18 10:11:09.393207 | orchestrator | Thursday 18 September 2025 10:11:01 +0000 (0:00:00.156) 0:00:00.156 **** 2025-09-18 10:11:09.393217 | orchestrator | ok: [testbed-manager] 2025-09-18 10:11:09.393228 | orchestrator | 2025-09-18 10:11:09.393238 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-09-18 10:11:09.393248 | orchestrator | Thursday 18 September 2025 10:11:02 +0000 (0:00:00.588) 0:00:00.745 **** 2025-09-18 10:11:09.393257 | orchestrator | changed: [testbed-manager] 2025-09-18 10:11:09.393267 | orchestrator | 2025-09-18 10:11:09.393277 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-09-18 10:11:09.393288 | orchestrator | Thursday 18 September 2025 10:11:02 +0000 (0:00:00.493) 0:00:01.239 **** 2025-09-18 10:11:09.393297 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-09-18 10:11:09.393307 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-09-18 10:11:09.393393 | orchestrator | changed: 
[testbed-manager] => (item=testbed-node-1) 2025-09-18 10:11:09.393410 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-09-18 10:11:09.393426 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-09-18 10:11:09.393460 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-09-18 10:11:09.393471 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5) 2025-09-18 10:11:09.393481 | orchestrator | 2025-09-18 10:11:09.393491 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-09-18 10:11:09.393501 | orchestrator | Thursday 18 September 2025 10:11:08 +0000 (0:00:05.643) 0:00:06.882 **** 2025-09-18 10:11:09.393510 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:11:09.393520 | orchestrator | 2025-09-18 10:11:09.393529 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-09-18 10:11:09.393539 | orchestrator | Thursday 18 September 2025 10:11:08 +0000 (0:00:00.076) 0:00:06.959 **** 2025-09-18 10:11:09.393549 | orchestrator | changed: [testbed-manager] 2025-09-18 10:11:09.393558 | orchestrator | 2025-09-18 10:11:09.393568 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:11:09.393579 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:11:09.393589 | orchestrator | 2025-09-18 10:11:09.393615 | orchestrator | 2025-09-18 10:11:09.393637 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:11:09.393648 | orchestrator | Thursday 18 September 2025 10:11:09 +0000 (0:00:00.587) 0:00:07.546 **** 2025-09-18 10:11:09.393659 | orchestrator | =============================================================================== 2025-09-18 10:11:09.393671 | orchestrator | osism.commons.sshconfig : Ensure config for each host 
exist ------------- 5.64s 2025-09-18 10:11:09.393681 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.59s 2025-09-18 10:11:09.393693 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.59s 2025-09-18 10:11:09.393703 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-09-18 10:11:09.393714 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.08s 2025-09-18 10:11:09.655111 | orchestrator | + osism apply known-hosts 2025-09-18 10:11:21.648400 | orchestrator | 2025-09-18 10:11:21 | INFO  | Task 4eac1c9a-f585-4171-a33a-2f88443a4b02 (known-hosts) was prepared for execution. 2025-09-18 10:11:21.648523 | orchestrator | 2025-09-18 10:11:21 | INFO  | It takes a moment until task 4eac1c9a-f585-4171-a33a-2f88443a4b02 (known-hosts) has been started and output is visible here. 2025-09-18 10:11:37.901060 | orchestrator | 2025-09-18 10:11:37.901178 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-09-18 10:11:37.901196 | orchestrator | 2025-09-18 10:11:37.901209 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-09-18 10:11:37.901221 | orchestrator | Thursday 18 September 2025 10:11:25 +0000 (0:00:00.165) 0:00:00.165 **** 2025-09-18 10:11:37.901233 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-18 10:11:37.901244 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-18 10:11:37.901255 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-18 10:11:37.901266 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-18 10:11:37.901276 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-18 10:11:37.901287 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-18 10:11:37.901329 | orchestrator | 
ok: [testbed-manager] => (item=testbed-node-2) 2025-09-18 10:11:37.901341 | orchestrator | 2025-09-18 10:11:37.901352 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-09-18 10:11:37.901364 | orchestrator | Thursday 18 September 2025 10:11:31 +0000 (0:00:05.922) 0:00:06.087 **** 2025-09-18 10:11:37.901395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-18 10:11:37.901408 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-18 10:11:37.901419 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-18 10:11:37.901430 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-18 10:11:37.901440 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-18 10:11:37.901461 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-18 10:11:37.901473 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-18 10:11:37.901484 | orchestrator | 2025-09-18 10:11:37.901495 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901506 | orchestrator | Thursday 18 September 2025 10:11:31 +0000 (0:00:00.163) 0:00:06.250 **** 2025-09-18 10:11:37.901517 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDyKpqqUlUGScv+OoR81h2aZZoU4SqyQj68suZiSxvHO) 2025-09-18 10:11:37.901533 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsjDScqra8HOOCOyFxK06Z5gzvtzb7oW9P5T4it10SzxBrpPhfC9uzLOtomfPCCia2tAZGwuwv32KlO5dthfpvynzromWwIwtABlnES2rE0AbH8PDl030SVFw3D70zHfZwnI+thKJgJp9+Pz3BODE2+hevMj3UjbyCDxLsXnaBy08yHiLtj3rb/gzaR6y7pQnLPfgOt6pOXU/7lZEPmD89desXtJHyd7sNAcOjQolHbIpIj+BnkHj2JFgdKeSJot4BJRNaNKYo/V3/JclHgeKANz+UBkKnQWGNVjGm6KqD5tmN2TeKkehmVQ8zJ1QCKjHCfWLhqxMndRIBisCwxWauOgsTjGqnZWUpGc8NhsUPGtzxpNLvGtUJxKdbgIDP4Foidae4uDkYBBVB6aw2REZLG1K8hnJxxACcCtOh4v/Sr3ubTfYBXdCbXYMc0an8huwVlpTaKB1cT5weRaHD4DBA+vPt1g8DMDtqOdYQESi3N2sFVFBujENTduBcES/bHIM=) 2025-09-18 10:11:37.901548 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBL1A+ka39z5vD516oYbZJbO1AgAjqs5oB3ZGDa1JIJ5J2nmueuJijX49P42QICb2MCbbj1bOlGM6MMqeSUP31A=) 2025-09-18 10:11:37.901561 | orchestrator | 2025-09-18 10:11:37.901573 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901583 | orchestrator | Thursday 18 September 2025 10:11:32 +0000 (0:00:01.174) 0:00:07.425 **** 2025-09-18 10:11:37.901594 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtw1IDyFHsF4MLcwz9MsxFmj7AGuVZyg68FQZfiE09v) 2025-09-18 10:11:37.901638 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQC7krU5VjM53Tr+xD41iIHyBtOPg8l8WHROzL8M8i41Q1iRXMfnFkdjRUgEpPyDWiKSimpeYcFqE5n8okwhzWxh6Fb79N9B3uo9GmaWJLl7ADlI1+sisQmxP14Qapjt6dpRPxZAVhbc4fd5zM18t8M/0A8wBQX/Bgzsz8vCtCa8B7PAW8+7GQo5V/uU7yk4AuEfPPSFO8D08u2PLGTR3Dftg/KG+J104EWZy/M+1SiO8H1UwWOAt+CMtH0irfgWOp8fFJJzWoeEf8BwMKAFeaFQcYajgLCKumW3XIokL9SpYSUBtKVb5KGLhH9T01yaDHfvdkt+s3y3udYa18MnqpCVc/Fkc3qIxuQNfyYrgqXk4/IYuJBpU+7JoqUAT7rJiplWpjEta02rUVYALifxmxUndKrDMQZF9CgbvJkUaBfXyvcdelTibDodua1d9juRdu6qKJAnd0g6mx0he8ORMPlZsUGM4tDjXK3BKC8nNmeYi0MwfAm18cm0KtDlJQ+x2FU=) 2025-09-18 10:11:37.901651 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhM0SAUsBl6uaCtLF69jGw6BlKACb6/y+ImvJpk3I57x6EyIjCXO2jrF7YCphItOr6/aD9Br0llZUP4p1enjAw=) 2025-09-18 10:11:37.901669 | orchestrator | 2025-09-18 10:11:37.901679 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901690 | orchestrator | Thursday 18 September 2025 10:11:33 +0000 (0:00:00.981) 0:00:08.407 **** 2025-09-18 10:11:37.901701 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIN/GLZdb+Ld293zbO9Fw2Z4b/2mUg3C5lrq4uLNVeyhN) 2025-09-18 10:11:37.901712 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDctL3zcHkSyZQGK4PQRO9HIDSWODFuYpX5NZJ5KCiGXpP9ZdZBTZSetco/F4iz53DBoqdBg3Mlf5y1I4kH4plTWVvk3A03hcdAPlaTInYhIBzdrHQLsSVTaiFvuw9WxnrgyFQY5DBz94Z/ZSae4l9e/qqUIhvlL6oL0SbznhwF0FSh2oQDe6IXtIoF97ZJbYTJh3HjOMs5trQu04v0HAxjDMwgNKQLmoSYKeVSowqIknmvuCK6oSRSrIFIUYUzD8iqCU7rnGBKYvD4KlYmlxodp3giyhYAOPITryOaV7S46kAapdILAQkN530AQi9wU4mRTVIXaZcmJOxxy5rPs6y6idcad2DObt1m8fUoTLpdVqBaarf4B9xpceJ3alMJNnWqYM9AdO4yoxfK5aVfoM5SNLLBmCEeMr5TXqA7ChfwBu+rYfv3kfjSl8cIDjespkmSBu/kP/aLLJ3cPSWzbkH8XrMcKNNASl0eGowl7pUKdb9RsQEWxrtBZwd/OrsdpKc=) 2025-09-18 10:11:37.901723 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVF4Z3nQJLVWAjAJVI7m3Baj0B8psuj8gioMTM1o9Q/KA5aiLAwfkUhcKWIXVT3B01UVlOiq23VIJBf8e1WFP4=) 2025-09-18 10:11:37.901734 | orchestrator | 2025-09-18 10:11:37.901745 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901756 | orchestrator | Thursday 18 September 2025 10:11:34 +0000 (0:00:01.046) 0:00:09.454 **** 2025-09-18 10:11:37.901767 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIQ+sPSJwW8LPqku1Cq5s91waKvnL5MMZ9LvhvU7K+Bg) 2025-09-18 10:11:37.901834 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCnFH3Ay6n2Yz+t0c8OqFLgIkd8v9EJ7ba9ZzrzGcNZt68FgVB2md84h7tcOzhG/UAw9Htqnz+mP9d4npCE/KqNnOugViUCPGnaU1qqzri2cLto9GqtuG8FbEsSLVtIb/q0cD9GRxOD6MmZyeZrePC/n3kNdHuxVq4/MmTbXLWO2engHKLB5S4MHOH6V6EqNuFPaEcVLyDLmWiRrFnDUEC3vTpKHN2vIAnijEX7VsIexzdGgD5Ntlq2RmFNZNTgW+o/umKa/fB3eHQvF1rY6WseDA8deCq0m1kEXx2aQ3InUnQA9KBH5aFzbZmkTd0kk7hHVcuftqvQv+u9Bkj40uD2QNGDT+XhxXn0/QdkhB1BsjziidG6h5B0aFiaD9QAR/s96UukLgJ+GsJYMfJkpeGAf3nz3KR0gkleMG1w0nXEwanXQyXPZOrCzzrZ2loNvMkOGj9V13VxGO5Bpio2vqoF3AdTr7cvqAwKnmnzxZuM5HmziWKoSy70PFuzZTcVH/0=) 2025-09-18 10:11:37.901846 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDD6D7dUfGO7qj4+Hfc+Pn1nMXnfgDJ1sluwe9qUKQDlkjQ99Ex2shwl0K/I30/tZX2h3zwL29BKFIhO9/J785M=) 2025-09-18 10:11:37.901857 | orchestrator | 2025-09-18 10:11:37.901868 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901879 | orchestrator | Thursday 18 September 2025 10:11:35 +0000 (0:00:01.062) 0:00:10.517 **** 2025-09-18 10:11:37.901889 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ecdsa-sha2-nistp256 
AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoVe1XeLeLqM5+4harP2kCyxvugm1+876DWPPULDRBVHtIoj6/oUrgj5E39oL5CCJugcfKJCAq4Jz8qDE4QmsY=) 2025-09-18 10:11:37.901901 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcPqYVQJmu23MUxoY4AuZ2HiTLiO6qvcQMa3hBfXx5+q7EOYdeRwB4e2aMOFRE+LGkz7cHEEiM2q2dOhwuMlDdx2QH/CkpkHCVTKrDGcRKfZaqIKkHgRwKwBLFHcIB9sbVM/T9dx7LZYiRXnB782G5RiCjToASeCCCrRQAq76mue8qYlee19Xz+K22WbF3IQZIEQ3pZMXiWx8imHDcU7K64iGN9NJKrpWd486xU9Vx4VndRPgnz6NguiqBtdx2/e7AmAiDt4Ivg0CVqvq74RW+vocY5ler69CanMTFkA5X2TmkW6DP41y5lto94kuX2ikkrxgs7jj3NSofMdISWk+YrfdX9A8Azu3tt4uA4yIkAt+RIBs7mzCYTkQ7jQ++ApZd/+j65se00MlGMuqdzhxYN9Ju3wQUswYxpkC2QP2q2IdShCQLF78xMmoQlUGBbwo6VjVuazySxWt2HaBRic++h9/ixQaBmAbPoKbbJXwloeZKmuDVkZbQFnuDc/YnC1s=) 2025-09-18 10:11:37.901912 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHqPMioqBfIu7X3PuTSLsRiVc0CyOMiKPX0HbBs49HcG) 2025-09-18 10:11:37.901930 | orchestrator | 2025-09-18 10:11:37.901941 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:37.901952 | orchestrator | Thursday 18 September 2025 10:11:36 +0000 (0:00:01.031) 0:00:11.548 **** 2025-09-18 10:11:37.901969 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLL82nRaCRobfjEZmbEmzUoaG6NVlQZmLruHZMUBA9UDnXv6jq6nD9lmtEuHcVjqbTRt1lWMQ5Js8xvV5HrPW4c=) 2025-09-18 10:11:48.543601 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVfkZNb+gU+n5CUKfPfitJzhdcpBGBRNVV3PvHRGgcd) 2025-09-18 10:11:48.543739 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCRXs0KnyRGC0O+o+Yzbdu7SfSTIyz3jw3fQ8Ad3wmUtmDXzbqdxbxDtwzk89P09jIHaiWDE41SByyM8ZMCe+xeNCY2RNyu6QM5QxylUjS1UIJZ7KNhwDNdKClWWFm3dREjXw0WBoTqQtVgAdQvHtu3tESsSm7/JGevE9u3OMcJtBjVz6iaqnALiyT/slUpaRskGZJbQE0hLsHY5JH73lcVcWdudO8WCXmSEBU6nUb80ZnDX+sf/Q6+jKaLPVeNTD2vLcwNRhJBg862XuW1nTTNfQcOgBopfilTtFrjEqJGJekBCz/7O8oUJkU5a0JO1J2FSKNWAwiAuHUEC4UjrhKtsMaX4F0uLbN/voRQuba4qTQOFSvuIKzCv+eDHiPvn9Uorl2ZfaPZF+GxsSmf3Pbfb8AuAYDc5KeNxT/CGDgQxtYn6M+YVDIXIWscqjW0zuKO1mIxawpXT/Z5DGVHB1u3xXOkvzwQr6TRRkq0f25VdCMc1EQk07BW2QGErmC259M=) 2025-09-18 10:11:48.543768 | orchestrator | 2025-09-18 10:11:48.543798 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:48.543820 | orchestrator | Thursday 18 September 2025 10:11:37 +0000 (0:00:01.038) 0:00:12.587 **** 2025-09-18 10:11:48.543838 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDShI+avw8/f/PqVR5DEpu8UtCge2ojalBgwfPTD9xRIyR8MIqZb1o8bbEpyIyF4uUe3R4sklZdKpf4sIM8o5dRSc5Z1PLMCvf3nmzgGeKrM9pX7IhqNmO6jAry6oJsf6KkOwlX/aEZhuGiYD5TFlB1ND7XXhz3M3AxM100CxQR7Hmj8xDwW8oTN1lnBE7NcOH2P8pZZrphNRvmXc0d5yJyEzxRQtwYnjqtWKsy8IiKa4pBjw/9/MkdsX0FCGOE7SkPDOtEV8rQBTG9M98bUBA6mOxpwq5UZZKHdI+BW8IHNy04yaJA5hWApSRtG7sgeJqpxONkYujAEP9DXJ5DoM2zPP9uVBLQrTqRXbSOGDjdX7oi/u5cRon2M8Wo8rd/khzoFiUqrMMnKatAImAm7mkgltwdfhOwqv4Zib5s4nlenx7KDfz2hIpftb/O/mZtnpNehWhkQDfd+1GUdDSV78uTbfW8CF3jo7ZuN0aFlgMoF6dDwfZhKzHx+b89rQGEPxM=) 2025-09-18 10:11:48.543857 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNT+NjB8mFk3WS4Wn/mCOyBpaz48qa7+IP9bz3fSCNlEU7AoEwMT7LXSVsKnI3X14T51ucMbZo9/mBJIfZot7VE=) 2025-09-18 10:11:48.543877 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHeJKuqRRVRbAy011ef2k1q/ISVDQJSuTBRcBPCvBqP6) 2025-09-18 10:11:48.543895 | orchestrator | 2025-09-18 10:11:48.543914 | 
orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-09-18 10:11:48.543933 | orchestrator | Thursday 18 September 2025 10:11:38 +0000 (0:00:01.034) 0:00:13.621 **** 2025-09-18 10:11:48.543953 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-09-18 10:11:48.543965 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-09-18 10:11:48.543976 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-09-18 10:11:48.543986 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-09-18 10:11:48.543997 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-09-18 10:11:48.544008 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-09-18 10:11:48.544018 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-09-18 10:11:48.544029 | orchestrator | 2025-09-18 10:11:48.544040 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-09-18 10:11:48.544052 | orchestrator | Thursday 18 September 2025 10:11:44 +0000 (0:00:05.278) 0:00:18.900 **** 2025-09-18 10:11:48.544083 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-09-18 10:11:48.544096 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-09-18 10:11:48.544132 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-09-18 10:11:48.544146 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-09-18 10:11:48.544158 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-09-18 10:11:48.544175 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-09-18 10:11:48.544187 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-09-18 10:11:48.544199 | orchestrator | 2025-09-18 10:11:48.544230 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:48.544244 | orchestrator | Thursday 18 September 2025 10:11:44 +0000 (0:00:00.162) 0:00:19.062 **** 2025-09-18 10:11:48.544255 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDyKpqqUlUGScv+OoR81h2aZZoU4SqyQj68suZiSxvHO) 2025-09-18 10:11:48.544271 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsjDScqra8HOOCOyFxK06Z5gzvtzb7oW9P5T4it10SzxBrpPhfC9uzLOtomfPCCia2tAZGwuwv32KlO5dthfpvynzromWwIwtABlnES2rE0AbH8PDl030SVFw3D70zHfZwnI+thKJgJp9+Pz3BODE2+hevMj3UjbyCDxLsXnaBy08yHiLtj3rb/gzaR6y7pQnLPfgOt6pOXU/7lZEPmD89desXtJHyd7sNAcOjQolHbIpIj+BnkHj2JFgdKeSJot4BJRNaNKYo/V3/JclHgeKANz+UBkKnQWGNVjGm6KqD5tmN2TeKkehmVQ8zJ1QCKjHCfWLhqxMndRIBisCwxWauOgsTjGqnZWUpGc8NhsUPGtzxpNLvGtUJxKdbgIDP4Foidae4uDkYBBVB6aw2REZLG1K8hnJxxACcCtOh4v/Sr3ubTfYBXdCbXYMc0an8huwVlpTaKB1cT5weRaHD4DBA+vPt1g8DMDtqOdYQESi3N2sFVFBujENTduBcES/bHIM=) 2025-09-18 10:11:48.544285 | orchestrator | 
changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBBL1A+ka39z5vD516oYbZJbO1AgAjqs5oB3ZGDa1JIJ5J2nmueuJijX49P42QICb2MCbbj1bOlGM6MMqeSUP31A=) 2025-09-18 10:11:48.544330 | orchestrator | 2025-09-18 10:11:48.544343 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:48.544355 | orchestrator | Thursday 18 September 2025 10:11:45 +0000 (0:00:01.037) 0:00:20.099 **** 2025-09-18 10:11:48.544367 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC7krU5VjM53Tr+xD41iIHyBtOPg8l8WHROzL8M8i41Q1iRXMfnFkdjRUgEpPyDWiKSimpeYcFqE5n8okwhzWxh6Fb79N9B3uo9GmaWJLl7ADlI1+sisQmxP14Qapjt6dpRPxZAVhbc4fd5zM18t8M/0A8wBQX/Bgzsz8vCtCa8B7PAW8+7GQo5V/uU7yk4AuEfPPSFO8D08u2PLGTR3Dftg/KG+J104EWZy/M+1SiO8H1UwWOAt+CMtH0irfgWOp8fFJJzWoeEf8BwMKAFeaFQcYajgLCKumW3XIokL9SpYSUBtKVb5KGLhH9T01yaDHfvdkt+s3y3udYa18MnqpCVc/Fkc3qIxuQNfyYrgqXk4/IYuJBpU+7JoqUAT7rJiplWpjEta02rUVYALifxmxUndKrDMQZF9CgbvJkUaBfXyvcdelTibDodua1d9juRdu6qKJAnd0g6mx0he8ORMPlZsUGM4tDjXK3BKC8nNmeYi0MwfAm18cm0KtDlJQ+x2FU=) 2025-09-18 10:11:48.544380 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOhM0SAUsBl6uaCtLF69jGw6BlKACb6/y+ImvJpk3I57x6EyIjCXO2jrF7YCphItOr6/aD9Br0llZUP4p1enjAw=) 2025-09-18 10:11:48.544393 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOtw1IDyFHsF4MLcwz9MsxFmj7AGuVZyg68FQZfiE09v) 2025-09-18 10:11:48.544405 | orchestrator | 2025-09-18 10:11:48.544417 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:48.544429 | orchestrator | Thursday 18 September 2025 10:11:46 +0000 (0:00:01.042) 0:00:21.142 **** 2025-09-18 10:11:48.544457 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 
AAAAC3NzaC1lZDI1NTE5AAAAIN/GLZdb+Ld293zbO9Fw2Z4b/2mUg3C5lrq4uLNVeyhN) 2025-09-18 10:11:48.544476 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDctL3zcHkSyZQGK4PQRO9HIDSWODFuYpX5NZJ5KCiGXpP9ZdZBTZSetco/F4iz53DBoqdBg3Mlf5y1I4kH4plTWVvk3A03hcdAPlaTInYhIBzdrHQLsSVTaiFvuw9WxnrgyFQY5DBz94Z/ZSae4l9e/qqUIhvlL6oL0SbznhwF0FSh2oQDe6IXtIoF97ZJbYTJh3HjOMs5trQu04v0HAxjDMwgNKQLmoSYKeVSowqIknmvuCK6oSRSrIFIUYUzD8iqCU7rnGBKYvD4KlYmlxodp3giyhYAOPITryOaV7S46kAapdILAQkN530AQi9wU4mRTVIXaZcmJOxxy5rPs6y6idcad2DObt1m8fUoTLpdVqBaarf4B9xpceJ3alMJNnWqYM9AdO4yoxfK5aVfoM5SNLLBmCEeMr5TXqA7ChfwBu+rYfv3kfjSl8cIDjespkmSBu/kP/aLLJ3cPSWzbkH8XrMcKNNASl0eGowl7pUKdb9RsQEWxrtBZwd/OrsdpKc=) 2025-09-18 10:11:48.544494 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFVF4Z3nQJLVWAjAJVI7m3Baj0B8psuj8gioMTM1o9Q/KA5aiLAwfkUhcKWIXVT3B01UVlOiq23VIJBf8e1WFP4=) 2025-09-18 10:11:48.544513 | orchestrator | 2025-09-18 10:11:48.544532 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:48.544550 | orchestrator | Thursday 18 September 2025 10:11:47 +0000 (0:00:01.028) 0:00:22.170 **** 2025-09-18 10:11:48.544568 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDD6D7dUfGO7qj4+Hfc+Pn1nMXnfgDJ1sluwe9qUKQDlkjQ99Ex2shwl0K/I30/tZX2h3zwL29BKFIhO9/J785M=) 2025-09-18 10:11:48.544609 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCnFH3Ay6n2Yz+t0c8OqFLgIkd8v9EJ7ba9ZzrzGcNZt68FgVB2md84h7tcOzhG/UAw9Htqnz+mP9d4npCE/KqNnOugViUCPGnaU1qqzri2cLto9GqtuG8FbEsSLVtIb/q0cD9GRxOD6MmZyeZrePC/n3kNdHuxVq4/MmTbXLWO2engHKLB5S4MHOH6V6EqNuFPaEcVLyDLmWiRrFnDUEC3vTpKHN2vIAnijEX7VsIexzdGgD5Ntlq2RmFNZNTgW+o/umKa/fB3eHQvF1rY6WseDA8deCq0m1kEXx2aQ3InUnQA9KBH5aFzbZmkTd0kk7hHVcuftqvQv+u9Bkj40uD2QNGDT+XhxXn0/QdkhB1BsjziidG6h5B0aFiaD9QAR/s96UukLgJ+GsJYMfJkpeGAf3nz3KR0gkleMG1w0nXEwanXQyXPZOrCzzrZ2loNvMkOGj9V13VxGO5Bpio2vqoF3AdTr7cvqAwKnmnzxZuM5HmziWKoSy70PFuzZTcVH/0=) 2025-09-18 10:11:52.626359 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIQ+sPSJwW8LPqku1Cq5s91waKvnL5MMZ9LvhvU7K+Bg) 2025-09-18 10:11:52.626462 | orchestrator | 2025-09-18 10:11:52.626479 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:52.626492 | orchestrator | Thursday 18 September 2025 10:11:48 +0000 (0:00:01.059) 0:00:23.230 **** 2025-09-18 10:11:52.626506 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDcPqYVQJmu23MUxoY4AuZ2HiTLiO6qvcQMa3hBfXx5+q7EOYdeRwB4e2aMOFRE+LGkz7cHEEiM2q2dOhwuMlDdx2QH/CkpkHCVTKrDGcRKfZaqIKkHgRwKwBLFHcIB9sbVM/T9dx7LZYiRXnB782G5RiCjToASeCCCrRQAq76mue8qYlee19Xz+K22WbF3IQZIEQ3pZMXiWx8imHDcU7K64iGN9NJKrpWd486xU9Vx4VndRPgnz6NguiqBtdx2/e7AmAiDt4Ivg0CVqvq74RW+vocY5ler69CanMTFkA5X2TmkW6DP41y5lto94kuX2ikkrxgs7jj3NSofMdISWk+YrfdX9A8Azu3tt4uA4yIkAt+RIBs7mzCYTkQ7jQ++ApZd/+j65se00MlGMuqdzhxYN9Ju3wQUswYxpkC2QP2q2IdShCQLF78xMmoQlUGBbwo6VjVuazySxWt2HaBRic++h9/ixQaBmAbPoKbbJXwloeZKmuDVkZbQFnuDc/YnC1s=) 2025-09-18 10:11:52.626521 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPoVe1XeLeLqM5+4harP2kCyxvugm1+876DWPPULDRBVHtIoj6/oUrgj5E39oL5CCJugcfKJCAq4Jz8qDE4QmsY=) 2025-09-18 10:11:52.626534 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHqPMioqBfIu7X3PuTSLsRiVc0CyOMiKPX0HbBs49HcG) 2025-09-18 10:11:52.626545 | orchestrator | 2025-09-18 10:11:52.626556 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:52.626567 | orchestrator | Thursday 18 September 2025 10:11:49 +0000 (0:00:01.057) 0:00:24.288 **** 2025-09-18 10:11:52.626578 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBLL82nRaCRobfjEZmbEmzUoaG6NVlQZmLruHZMUBA9UDnXv6jq6nD9lmtEuHcVjqbTRt1lWMQ5Js8xvV5HrPW4c=) 2025-09-18 10:11:52.626613 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCRXs0KnyRGC0O+o+Yzbdu7SfSTIyz3jw3fQ8Ad3wmUtmDXzbqdxbxDtwzk89P09jIHaiWDE41SByyM8ZMCe+xeNCY2RNyu6QM5QxylUjS1UIJZ7KNhwDNdKClWWFm3dREjXw0WBoTqQtVgAdQvHtu3tESsSm7/JGevE9u3OMcJtBjVz6iaqnALiyT/slUpaRskGZJbQE0hLsHY5JH73lcVcWdudO8WCXmSEBU6nUb80ZnDX+sf/Q6+jKaLPVeNTD2vLcwNRhJBg862XuW1nTTNfQcOgBopfilTtFrjEqJGJekBCz/7O8oUJkU5a0JO1J2FSKNWAwiAuHUEC4UjrhKtsMaX4F0uLbN/voRQuba4qTQOFSvuIKzCv+eDHiPvn9Uorl2ZfaPZF+GxsSmf3Pbfb8AuAYDc5KeNxT/CGDgQxtYn6M+YVDIXIWscqjW0zuKO1mIxawpXT/Z5DGVHB1u3xXOkvzwQr6TRRkq0f25VdCMc1EQk07BW2QGErmC259M=) 2025-09-18 10:11:52.626625 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIPVfkZNb+gU+n5CUKfPfitJzhdcpBGBRNVV3PvHRGgcd) 2025-09-18 10:11:52.626636 | orchestrator | 2025-09-18 10:11:52.626647 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-09-18 10:11:52.626657 | orchestrator | Thursday 18 September 2025 10:11:50 +0000 (0:00:01.044) 0:00:25.332 **** 2025-09-18 10:11:52.626668 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDShI+avw8/f/PqVR5DEpu8UtCge2ojalBgwfPTD9xRIyR8MIqZb1o8bbEpyIyF4uUe3R4sklZdKpf4sIM8o5dRSc5Z1PLMCvf3nmzgGeKrM9pX7IhqNmO6jAry6oJsf6KkOwlX/aEZhuGiYD5TFlB1ND7XXhz3M3AxM100CxQR7Hmj8xDwW8oTN1lnBE7NcOH2P8pZZrphNRvmXc0d5yJyEzxRQtwYnjqtWKsy8IiKa4pBjw/9/MkdsX0FCGOE7SkPDOtEV8rQBTG9M98bUBA6mOxpwq5UZZKHdI+BW8IHNy04yaJA5hWApSRtG7sgeJqpxONkYujAEP9DXJ5DoM2zPP9uVBLQrTqRXbSOGDjdX7oi/u5cRon2M8Wo8rd/khzoFiUqrMMnKatAImAm7mkgltwdfhOwqv4Zib5s4nlenx7KDfz2hIpftb/O/mZtnpNehWhkQDfd+1GUdDSV78uTbfW8CF3jo7ZuN0aFlgMoF6dDwfZhKzHx+b89rQGEPxM=) 2025-09-18 10:11:52.626680 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBNT+NjB8mFk3WS4Wn/mCOyBpaz48qa7+IP9bz3fSCNlEU7AoEwMT7LXSVsKnI3X14T51ucMbZo9/mBJIfZot7VE=) 2025-09-18 10:11:52.626691 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHeJKuqRRVRbAy011ef2k1q/ISVDQJSuTBRcBPCvBqP6) 2025-09-18 10:11:52.626702 | orchestrator | 2025-09-18 10:11:52.626713 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-09-18 10:11:52.626724 | orchestrator | Thursday 18 September 2025 10:11:51 +0000 (0:00:00.999) 0:00:26.332 **** 2025-09-18 10:11:52.626735 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-09-18 10:11:52.626746 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-09-18 10:11:52.626757 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-09-18 10:11:52.626767 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-09-18 10:11:52.626778 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-18 10:11:52.626805 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-09-18 10:11:52.626817 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-09-18 10:11:52.626828 | orchestrator | 
skipping: [testbed-manager] 2025-09-18 10:11:52.626840 | orchestrator | 2025-09-18 10:11:52.626851 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-09-18 10:11:52.626864 | orchestrator | Thursday 18 September 2025 10:11:51 +0000 (0:00:00.139) 0:00:26.471 **** 2025-09-18 10:11:52.626877 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:11:52.626889 | orchestrator | 2025-09-18 10:11:52.626902 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-09-18 10:11:52.626914 | orchestrator | Thursday 18 September 2025 10:11:51 +0000 (0:00:00.059) 0:00:26.531 **** 2025-09-18 10:11:52.626926 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:11:52.626938 | orchestrator | 2025-09-18 10:11:52.626950 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-09-18 10:11:52.626961 | orchestrator | Thursday 18 September 2025 10:11:51 +0000 (0:00:00.051) 0:00:26.583 **** 2025-09-18 10:11:52.626979 | orchestrator | changed: [testbed-manager] 2025-09-18 10:11:52.626990 | orchestrator | 2025-09-18 10:11:52.627000 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:11:52.627011 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:11:52.627023 | orchestrator | 2025-09-18 10:11:52.627034 | orchestrator | 2025-09-18 10:11:52.627044 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:11:52.627055 | orchestrator | Thursday 18 September 2025 10:11:52 +0000 (0:00:00.491) 0:00:27.074 **** 2025-09-18 10:11:52.627066 | orchestrator | =============================================================================== 2025-09-18 10:11:52.627076 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 5.92s 2025-09-18 
10:11:52.627087 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.28s 2025-09-18 10:11:52.627098 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.17s 2025-09-18 10:11:52.627109 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-18 10:11:52.627120 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-18 10:11:52.627130 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-09-18 10:11:52.627158 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-09-18 10:11:52.627169 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-18 10:11:52.627180 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-18 10:11:52.627191 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-18 10:11:52.627201 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-09-18 10:11:52.627212 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-18 10:11:52.627222 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-18 10:11:52.627233 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-09-18 10:11:52.627244 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.00s 2025-09-18 10:11:52.627254 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.98s 2025-09-18 10:11:52.627265 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.49s 2025-09-18 
10:11:52.627275 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-09-18 10:11:52.627287 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.16s 2025-09-18 10:11:52.627330 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.14s 2025-09-18 10:11:52.884004 | orchestrator | + osism apply squid 2025-09-18 10:12:05.009133 | orchestrator | 2025-09-18 10:12:05 | INFO  | Task f07a771a-96a1-41fd-8b6f-3dc904b1de5c (squid) was prepared for execution. 2025-09-18 10:12:05.009220 | orchestrator | 2025-09-18 10:12:05 | INFO  | It takes a moment until task f07a771a-96a1-41fd-8b6f-3dc904b1de5c (squid) has been started and output is visible here. 2025-09-18 10:14:01.833806 | orchestrator | 2025-09-18 10:14:01.833926 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-09-18 10:14:01.833943 | orchestrator | 2025-09-18 10:14:01.833955 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-09-18 10:14:01.833966 | orchestrator | Thursday 18 September 2025 10:12:08 +0000 (0:00:00.165) 0:00:00.165 **** 2025-09-18 10:14:01.834067 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-09-18 10:14:01.834083 | orchestrator | 2025-09-18 10:14:01.834094 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-09-18 10:14:01.834137 | orchestrator | Thursday 18 September 2025 10:12:08 +0000 (0:00:00.095) 0:00:00.261 **** 2025-09-18 10:14:01.834149 | orchestrator | ok: [testbed-manager] 2025-09-18 10:14:01.834161 | orchestrator | 2025-09-18 10:14:01.834172 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-09-18 
10:14:01.834183 | orchestrator | Thursday 18 September 2025 10:12:10 +0000 (0:00:01.538) 0:00:01.799 **** 2025-09-18 10:14:01.834194 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-09-18 10:14:01.834205 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-09-18 10:14:01.834216 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-09-18 10:14:01.834226 | orchestrator | 2025-09-18 10:14:01.834237 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-09-18 10:14:01.834248 | orchestrator | Thursday 18 September 2025 10:12:11 +0000 (0:00:01.216) 0:00:03.015 **** 2025-09-18 10:14:01.834307 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-09-18 10:14:01.834319 | orchestrator | 2025-09-18 10:14:01.834330 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-09-18 10:14:01.834341 | orchestrator | Thursday 18 September 2025 10:12:12 +0000 (0:00:01.069) 0:00:04.084 **** 2025-09-18 10:14:01.834354 | orchestrator | ok: [testbed-manager] 2025-09-18 10:14:01.834367 | orchestrator | 2025-09-18 10:14:01.834380 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-09-18 10:14:01.834392 | orchestrator | Thursday 18 September 2025 10:12:13 +0000 (0:00:00.349) 0:00:04.434 **** 2025-09-18 10:14:01.834404 | orchestrator | changed: [testbed-manager] 2025-09-18 10:14:01.834417 | orchestrator | 2025-09-18 10:14:01.834429 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-09-18 10:14:01.834442 | orchestrator | Thursday 18 September 2025 10:12:14 +0000 (0:00:00.971) 0:00:05.406 **** 2025-09-18 10:14:01.834454 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
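The `FAILED - RETRYING` line above shows the squid role polling the freshly started container until it answers, then reporting `ok`. Outside the playbook, that wait-until-reachable pattern can be sketched as a plain TCP poll loop; the retry count echoes the task's "10 retries", but the host, port, and delay values are illustrative assumptions, not values taken from this job output.

```python
import socket
import time


def wait_for_service(host: str, port: int, retries: int = 10, delay: float = 1.0) -> bool:
    """Return True once host:port accepts a TCP connection, retrying up to
    `retries` times with `delay` seconds between attempts -- roughly the
    behavior of the retried 'Manage squid service' task above."""
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=2):
                return True
        except OSError:
            time.sleep(delay)
    return False
```

For example, `wait_for_service("192.168.16.5", 3128)` would poll a proxy on squid's conventional port 3128 (the port is an assumption; this log does not state which port the deployment exposes).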
2025-09-18 10:14:01.834466 | orchestrator | ok: [testbed-manager] 2025-09-18 10:14:01.834478 | orchestrator | 2025-09-18 10:14:01.834490 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-09-18 10:14:01.834502 | orchestrator | Thursday 18 September 2025 10:12:45 +0000 (0:00:31.388) 0:00:36.794 **** 2025-09-18 10:14:01.834515 | orchestrator | changed: [testbed-manager] 2025-09-18 10:14:01.834527 | orchestrator | 2025-09-18 10:14:01.834539 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-09-18 10:14:01.834551 | orchestrator | Thursday 18 September 2025 10:13:00 +0000 (0:00:15.341) 0:00:52.136 **** 2025-09-18 10:14:01.834564 | orchestrator | Pausing for 60 seconds 2025-09-18 10:14:01.834577 | orchestrator | changed: [testbed-manager] 2025-09-18 10:14:01.834590 | orchestrator | 2025-09-18 10:14:01.834602 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-09-18 10:14:01.834614 | orchestrator | Thursday 18 September 2025 10:14:00 +0000 (0:01:00.069) 0:01:52.205 **** 2025-09-18 10:14:01.834627 | orchestrator | ok: [testbed-manager] 2025-09-18 10:14:01.834639 | orchestrator | 2025-09-18 10:14:01.834651 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-09-18 10:14:01.834664 | orchestrator | Thursday 18 September 2025 10:14:00 +0000 (0:00:00.063) 0:01:52.269 **** 2025-09-18 10:14:01.834676 | orchestrator | changed: [testbed-manager] 2025-09-18 10:14:01.834689 | orchestrator | 2025-09-18 10:14:01.834700 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:14:01.834711 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:14:01.834722 | orchestrator | 2025-09-18 10:14:01.834732 | orchestrator | 2025-09-18 10:14:01.834743 | orchestrator | 
TASKS RECAP ******************************************************************** 2025-09-18 10:14:01.834753 | orchestrator | Thursday 18 September 2025 10:14:01 +0000 (0:00:00.598) 0:01:52.867 **** 2025-09-18 10:14:01.834773 | orchestrator | =============================================================================== 2025-09-18 10:14:01.834783 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.07s 2025-09-18 10:14:01.834794 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.39s 2025-09-18 10:14:01.834804 | orchestrator | osism.services.squid : Restart squid service --------------------------- 15.34s 2025-09-18 10:14:01.834815 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.54s 2025-09-18 10:14:01.834825 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.22s 2025-09-18 10:14:01.834836 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.07s 2025-09-18 10:14:01.834846 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.97s 2025-09-18 10:14:01.834857 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.60s 2025-09-18 10:14:01.834867 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.35s 2025-09-18 10:14:01.834878 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-09-18 10:14:01.834888 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-09-18 10:14:02.104549 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-18 10:14:02.105141 | orchestrator | ++ semver latest 9.0.0 2025-09-18 10:14:02.154661 | orchestrator | + [[ -1 -lt 0 ]] 2025-09-18 10:14:02.154680 | orchestrator | + [[ latest != \l\a\t\e\s\t ]] 2025-09-18 10:14:02.155479 | 
orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-09-18 10:14:14.158775 | orchestrator | 2025-09-18 10:14:14 | INFO  | Task a7170c4c-6077-412b-886d-d22c0cc6accb (operator) was prepared for execution. 2025-09-18 10:14:14.158889 | orchestrator | 2025-09-18 10:14:14 | INFO  | It takes a moment until task a7170c4c-6077-412b-886d-d22c0cc6accb (operator) has been started and output is visible here. 2025-09-18 10:14:29.423206 | orchestrator | 2025-09-18 10:14:29.423349 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-09-18 10:14:29.423364 | orchestrator | 2025-09-18 10:14:29.423375 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-09-18 10:14:29.423385 | orchestrator | Thursday 18 September 2025 10:14:17 +0000 (0:00:00.140) 0:00:00.140 **** 2025-09-18 10:14:29.423395 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:14:29.423405 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:14:29.423415 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:14:29.423424 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:14:29.423433 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:14:29.423443 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:14:29.423452 | orchestrator | 2025-09-18 10:14:29.423462 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-09-18 10:14:29.423472 | orchestrator | Thursday 18 September 2025 10:14:21 +0000 (0:00:03.267) 0:00:03.408 **** 2025-09-18 10:14:29.423500 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:14:29.423510 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:14:29.423520 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:14:29.423530 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:14:29.423539 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:14:29.423549 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:14:29.423558 | orchestrator | 2025-09-18 
10:14:29.423568 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-09-18 10:14:29.423577 | orchestrator | 2025-09-18 10:14:29.423587 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-09-18 10:14:29.423596 | orchestrator | Thursday 18 September 2025 10:14:21 +0000 (0:00:00.788) 0:00:04.196 **** 2025-09-18 10:14:29.423606 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:14:29.423615 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:14:29.423624 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:14:29.423634 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:14:29.423643 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:14:29.423652 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:14:29.423683 | orchestrator | 2025-09-18 10:14:29.423694 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-09-18 10:14:29.423704 | orchestrator | Thursday 18 September 2025 10:14:22 +0000 (0:00:00.140) 0:00:04.337 **** 2025-09-18 10:14:29.423713 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:14:29.423722 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:14:29.423732 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:14:29.423741 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:14:29.423753 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:14:29.423763 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:14:29.423774 | orchestrator | 2025-09-18 10:14:29.423785 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-09-18 10:14:29.423796 | orchestrator | Thursday 18 September 2025 10:14:22 +0000 (0:00:00.140) 0:00:04.478 **** 2025-09-18 10:14:29.423807 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:14:29.423819 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:14:29.423830 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:14:29.423841 | 
orchestrator | changed: [testbed-node-3] 2025-09-18 10:14:29.423851 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:14:29.423862 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:14:29.423873 | orchestrator | 2025-09-18 10:14:29.423885 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-09-18 10:14:29.423896 | orchestrator | Thursday 18 September 2025 10:14:22 +0000 (0:00:00.594) 0:00:05.073 **** 2025-09-18 10:14:29.423907 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:14:29.423917 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:14:29.423928 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:14:29.423938 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:14:29.423949 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:14:29.423959 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:14:29.423970 | orchestrator | 2025-09-18 10:14:29.423981 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-09-18 10:14:29.423993 | orchestrator | Thursday 18 September 2025 10:14:23 +0000 (0:00:00.826) 0:00:05.899 **** 2025-09-18 10:14:29.424003 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-09-18 10:14:29.424015 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-09-18 10:14:29.424025 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-09-18 10:14:29.424037 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-09-18 10:14:29.424048 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-09-18 10:14:29.424058 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-09-18 10:14:29.424069 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-09-18 10:14:29.424080 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-09-18 10:14:29.424092 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-09-18 10:14:29.424103 | orchestrator | changed: 
[testbed-node-0] => (item=sudo) 2025-09-18 10:14:29.424113 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-09-18 10:14:29.424122 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-09-18 10:14:29.424131 | orchestrator | 2025-09-18 10:14:29.424141 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-09-18 10:14:29.424151 | orchestrator | Thursday 18 September 2025 10:14:24 +0000 (0:00:01.189) 0:00:07.089 **** 2025-09-18 10:14:29.424160 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:14:29.424169 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:14:29.424179 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:14:29.424188 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:14:29.424197 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:14:29.424207 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:14:29.424216 | orchestrator | 2025-09-18 10:14:29.424225 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-09-18 10:14:29.424258 | orchestrator | Thursday 18 September 2025 10:14:26 +0000 (0:00:01.240) 0:00:08.330 **** 2025-09-18 10:14:29.424268 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created 2025-09-18 10:14:29.424285 | orchestrator | with a mode of 0700, this may cause issues when running as another user. 
To
2025-09-18 10:14:29.424295 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-09-18 10:14:29.424305 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424331 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424341 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424351 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424360 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424370 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-09-18 10:14:29.424379 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424388 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424398 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424407 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424417 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424426 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-09-18 10:14:29.424435 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424445 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424454 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424464 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424473 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424483 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-09-18 10:14:29.424492 | orchestrator |
2025-09-18 10:14:29.424502 | orchestrator | TASK [osism.commons.operator : Set custom environment variables in .bashrc configuration file] ***
2025-09-18 10:14:29.424511 | orchestrator | Thursday 18 September 2025 10:14:27 +0000 (0:00:01.290) 0:00:09.620 ****
2025-09-18 10:14:29.424521 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:29.424530 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:29.424540 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:29.424549 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:29.424558 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:29.424567 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:29.424577 | orchestrator |
2025-09-18 10:14:29.424586 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-09-18 10:14:29.424595 | orchestrator | Thursday 18 September 2025 10:14:27 +0000 (0:00:00.162) 0:00:09.783 ****
2025-09-18 10:14:29.424605 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:14:29.424614 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:14:29.424623 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:14:29.424633 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:14:29.424642 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:14:29.424651 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:14:29.424660 | orchestrator |
2025-09-18 10:14:29.424670 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-09-18 10:14:29.424679 | orchestrator | Thursday 18 September 2025 10:14:28 +0000 (0:00:00.560) 0:00:10.343 ****
2025-09-18 10:14:29.424689 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:29.424698 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:29.424707 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:29.424716 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:29.424725 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:29.424735 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:29.424744 | orchestrator |
2025-09-18 10:14:29.424760 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-09-18 10:14:29.424769 | orchestrator | Thursday 18 September 2025 10:14:28 +0000 (0:00:00.185) 0:00:10.529 ****
2025-09-18 10:14:29.424779 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-18 10:14:29.424793 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-18 10:14:29.424802 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:14:29.424812 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:14:29.424821 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-09-18 10:14:29.424831 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:14:29.424840 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-09-18 10:14:29.424849 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:14:29.424858 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-18 10:14:29.424868 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:14:29.424877 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-18 10:14:29.424886 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:14:29.424896 | orchestrator |
2025-09-18 10:14:29.424905 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-09-18 10:14:29.424914 | orchestrator | Thursday 18 September 2025 10:14:28 +0000 (0:00:00.735) 0:00:11.265 ****
2025-09-18 10:14:29.424924 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:29.424933 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:29.424942 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:29.424952 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:29.424961 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:29.424970 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:29.424979 | orchestrator |
2025-09-18 10:14:29.424989 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-09-18 10:14:29.424998 | orchestrator | Thursday 18 September 2025 10:14:29 +0000 (0:00:00.138) 0:00:11.404 ****
2025-09-18 10:14:29.425007 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:29.425017 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:29.425026 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:29.425035 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:29.425044 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:29.425054 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:29.425063 | orchestrator |
2025-09-18 10:14:29.425072 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-09-18 10:14:29.425088 | orchestrator | Thursday 18 September 2025 10:14:29 +0000 (0:00:00.142) 0:00:11.547 ****
2025-09-18 10:14:29.425101 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:29.425111 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:29.425121 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:29.425130 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:29.425145 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:30.513859 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:30.513952 | orchestrator |
2025-09-18 10:14:30.513965 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-09-18 10:14:30.513977 | orchestrator | Thursday 18 September 2025 10:14:29 +0000 (0:00:00.137) 0:00:11.685 ****
2025-09-18 10:14:30.513987 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:14:30.513997 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:14:30.514006 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:14:30.514093 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:14:30.514106 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:14:30.514116 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:14:30.514126 | orchestrator |
2025-09-18 10:14:30.514136 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-09-18 10:14:30.514146 | orchestrator | Thursday 18 September 2025 10:14:30 +0000 (0:00:00.646) 0:00:12.331 ****
2025-09-18 10:14:30.514156 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:14:30.514165 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:14:30.514174 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:14:30.514208 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:14:30.514218 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:14:30.514228 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:14:30.514260 | orchestrator |
2025-09-18 10:14:30.514270 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:14:30.514281 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514293 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514303 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514312 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514322 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514331 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-09-18 10:14:30.514341 | orchestrator |
2025-09-18 10:14:30.514350 | orchestrator |
2025-09-18 10:14:30.514360 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:14:30.514369 | orchestrator | Thursday 18 September 2025 10:14:30 +0000 (0:00:00.227) 0:00:12.559 ****
2025-09-18 10:14:30.514379 | orchestrator | ===============================================================================
2025-09-18 10:14:30.514389 | orchestrator | Gathering Facts --------------------------------------------------------- 3.27s
2025-09-18 10:14:30.514398 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.29s
2025-09-18 10:14:30.514409 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.24s
2025-09-18 10:14:30.514420 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.19s
2025-09-18 10:14:30.514431 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.83s
2025-09-18 10:14:30.514442 | orchestrator | Do not require tty for all users ---------------------------------------- 0.79s
2025-09-18 10:14:30.514452 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.74s
2025-09-18 10:14:30.514463 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.65s
2025-09-18 10:14:30.514473 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.59s
2025-09-18 10:14:30.514484 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.56s
2025-09-18 10:14:30.514495 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-09-18 10:14:30.514505 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.19s
2025-09-18 10:14:30.514516 | orchestrator | osism.commons.operator : Set custom environment variables in .bashrc configuration file --- 0.16s
2025-09-18 10:14:30.514526 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.14s
2025-09-18 10:14:30.514537 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.14s
2025-09-18 10:14:30.514547 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.14s
2025-09-18 10:14:30.514557 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.14s
2025-09-18 10:14:30.514568 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.14s
2025-09-18 10:14:30.779784 | orchestrator | + osism apply --environment custom facts
2025-09-18 10:14:32.589935 | orchestrator | 2025-09-18 10:14:32 | INFO  | Trying to run play facts in environment custom
2025-09-18 10:14:42.747794 | orchestrator | 2025-09-18 10:14:42 | INFO  | Task 66adb4bd-889c-4303-8231-5b2de367efe7 (facts) was prepared for execution.
2025-09-18 10:14:42.747930 | orchestrator | 2025-09-18 10:14:42 | INFO  | It takes a moment until task 66adb4bd-889c-4303-8231-5b2de367efe7 (facts) has been started and output is visible here.
2025-09-18 10:15:26.245821 | orchestrator |
2025-09-18 10:15:26.245941 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-09-18 10:15:26.245958 | orchestrator |
2025-09-18 10:15:26.245971 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-18 10:15:26.245983 | orchestrator | Thursday 18 September 2025 10:14:46 +0000 (0:00:00.098) 0:00:00.098 ****
2025-09-18 10:15:26.245994 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:26.246006 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.246089 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:15:26.246110 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:15:26.246160 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:15:26.246211 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.246230 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.246249 | orchestrator |
2025-09-18 10:15:26.246268 | orchestrator | TASK [Copy fact file] **********************************************************
2025-09-18 10:15:26.246287 | orchestrator | Thursday 18 September 2025 10:14:47 +0000 (0:00:01.433) 0:00:01.531 ****
2025-09-18 10:15:26.246302 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:26.246314 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.246324 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.246335 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:15:26.246351 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.246371 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:15:26.246383 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:15:26.246397 | orchestrator |
2025-09-18 10:15:26.246409 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-09-18 10:15:26.246421 | orchestrator |
2025-09-18 10:15:26.246434 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-09-18 10:15:26.246446 | orchestrator | Thursday 18 September 2025 10:14:49 +0000 (0:00:01.154) 0:00:02.685 ****
2025-09-18 10:15:26.246458 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.246470 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.246482 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.246495 | orchestrator |
2025-09-18 10:15:26.246506 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-09-18 10:15:26.246519 | orchestrator | Thursday 18 September 2025 10:14:49 +0000 (0:00:00.116) 0:00:02.802 ****
2025-09-18 10:15:26.246531 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.246543 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.246555 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.246567 | orchestrator |
2025-09-18 10:15:26.246579 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-09-18 10:15:26.246591 | orchestrator | Thursday 18 September 2025 10:14:49 +0000 (0:00:00.190) 0:00:02.992 ****
2025-09-18 10:15:26.246603 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.246616 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.246628 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.246639 | orchestrator |
2025-09-18 10:15:26.246651 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-09-18 10:15:26.246664 | orchestrator | Thursday 18 September 2025 10:14:49 +0000 (0:00:00.186) 0:00:03.179 ****
2025-09-18 10:15:26.246695 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:15:26.246721 | orchestrator |
2025-09-18 10:15:26.246732 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-09-18 10:15:26.246743 | orchestrator | Thursday 18 September 2025 10:14:49 +0000 (0:00:00.132) 0:00:03.312 ****
2025-09-18 10:15:26.246783 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.246795 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.246805 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.246816 | orchestrator |
2025-09-18 10:15:26.246827 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-09-18 10:15:26.246837 | orchestrator | Thursday 18 September 2025 10:14:50 +0000 (0:00:00.407) 0:00:03.719 ****
2025-09-18 10:15:26.246848 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:15:26.246859 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:15:26.246869 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:15:26.246880 | orchestrator |
2025-09-18 10:15:26.246890 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-09-18 10:15:26.246901 | orchestrator | Thursday 18 September 2025 10:14:50 +0000 (0:00:00.100) 0:00:03.819 ****
2025-09-18 10:15:26.246911 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.246922 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.246932 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.246943 | orchestrator |
2025-09-18 10:15:26.246953 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-09-18 10:15:26.246964 | orchestrator | Thursday 18 September 2025 10:14:51 +0000 (0:00:00.956) 0:00:04.775 ****
2025-09-18 10:15:26.246975 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.246985 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.246996 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.247006 | orchestrator |
2025-09-18 10:15:26.247017 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-09-18 10:15:26.247028 | orchestrator | Thursday 18 September 2025 10:14:51 +0000 (0:00:00.448) 0:00:05.224 ****
2025-09-18 10:15:26.247039 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.247049 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.247060 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.247070 | orchestrator |
2025-09-18 10:15:26.247081 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-09-18 10:15:26.247091 | orchestrator | Thursday 18 September 2025 10:14:52 +0000 (0:00:00.984) 0:00:06.208 ****
2025-09-18 10:15:26.247102 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.247112 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.247123 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.247133 | orchestrator |
2025-09-18 10:15:26.247144 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-09-18 10:15:26.247155 | orchestrator | Thursday 18 September 2025 10:15:09 +0000 (0:00:17.221) 0:00:23.430 ****
2025-09-18 10:15:26.247165 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:15:26.247176 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:15:26.247223 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:15:26.247235 | orchestrator |
2025-09-18 10:15:26.247264 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-09-18 10:15:26.247296 | orchestrator | Thursday 18 September 2025 10:15:09 +0000 (0:00:00.106) 0:00:23.536 ****
2025-09-18 10:15:26.247308 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:15:26.247319 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:15:26.247329 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:15:26.247340 | orchestrator |
2025-09-18 10:15:26.247350 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-09-18 10:15:26.247361 | orchestrator | Thursday 18 September 2025 10:15:17 +0000 (0:00:07.299) 0:00:30.836 ****
2025-09-18 10:15:26.247372 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.247382 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.247393 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.247403 | orchestrator |
2025-09-18 10:15:26.247414 | orchestrator | TASK [Copy fact files] *********************************************************
2025-09-18 10:15:26.247425 | orchestrator | Thursday 18 September 2025 10:15:17 +0000 (0:00:00.404) 0:00:31.241 ****
2025-09-18 10:15:26.247435 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-09-18 10:15:26.247456 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-09-18 10:15:26.247467 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-09-18 10:15:26.247477 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-09-18 10:15:26.247488 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-09-18 10:15:26.247498 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-09-18 10:15:26.247509 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-09-18 10:15:26.247519 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-09-18 10:15:26.247530 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-09-18 10:15:26.247541 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-09-18 10:15:26.247551 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-09-18 10:15:26.247562 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-09-18 10:15:26.247572 | orchestrator |
2025-09-18 10:15:26.247583 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-18 10:15:26.247593 | orchestrator | Thursday 18 September 2025 10:15:21 +0000 (0:00:03.503) 0:00:34.745 ****
2025-09-18 10:15:26.247604 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.247615 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.247625 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.247636 | orchestrator |
2025-09-18 10:15:26.247647 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-18 10:15:26.247657 | orchestrator |
2025-09-18 10:15:26.247668 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-18 10:15:26.247679 | orchestrator | Thursday 18 September 2025 10:15:22 +0000 (0:00:01.215) 0:00:35.960 ****
2025-09-18 10:15:26.247689 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:15:26.247700 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:15:26.247711 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:15:26.247721 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:26.247732 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:26.247742 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:26.247752 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:26.247763 | orchestrator |
2025-09-18 10:15:26.247774 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:15:26.247785 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:15:26.247797 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:15:26.247809 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:15:26.247820 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:15:26.247830 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:15:26.247841 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:15:26.247852 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:15:26.247863 | orchestrator |
2025-09-18 10:15:26.247873 | orchestrator |
2025-09-18 10:15:26.247884 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:15:26.247895 | orchestrator | Thursday 18 September 2025 10:15:26 +0000 (0:00:03.816) 0:00:39.777 ****
2025-09-18 10:15:26.247905 | orchestrator | ===============================================================================
2025-09-18 10:15:26.247923 | orchestrator | osism.commons.repository : Update package cache ------------------------ 17.22s
2025-09-18 10:15:26.247934 | orchestrator | Install required packages (Debian) -------------------------------------- 7.30s
2025-09-18 10:15:26.247944 | orchestrator | Gathers facts about hosts ----------------------------------------------- 3.82s
2025-09-18 10:15:26.247955 | orchestrator | Copy fact files --------------------------------------------------------- 3.50s
2025-09-18 10:15:26.247970 | orchestrator | Create custom facts directory ------------------------------------------- 1.43s
2025-09-18 10:15:26.247981 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.22s
2025-09-18 10:15:26.247998 | orchestrator | Copy fact file ---------------------------------------------------------- 1.15s
2025-09-18 10:15:26.443314 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 0.98s
2025-09-18 10:15:26.443407 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.96s
2025-09-18 10:15:26.443421 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.45s
2025-09-18 10:15:26.443433 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.41s
2025-09-18 10:15:26.443444 | orchestrator | Create custom facts directory ------------------------------------------- 0.40s
2025-09-18 10:15:26.443455 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.19s
2025-09-18 10:15:26.443466 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.19s
2025-09-18 10:15:26.443476 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-09-18 10:15:26.443487 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.12s
2025-09-18 10:15:26.443498 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.11s
2025-09-18 10:15:26.443509 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.10s
2025-09-18 10:15:26.690905 | orchestrator | + osism apply bootstrap
2025-09-18 10:15:38.687803 | orchestrator | 2025-09-18 10:15:38 | INFO  | Task 6b3e9986-bb7a-45f3-b266-ca0ac6ee08b5 (bootstrap) was prepared for execution.
2025-09-18 10:15:38.687897 | orchestrator | 2025-09-18 10:15:38 | INFO  | It takes a moment until task 6b3e9986-bb7a-45f3-b266-ca0ac6ee08b5 (bootstrap) has been started and output is visible here.
2025-09-18 10:15:54.139903 | orchestrator |
2025-09-18 10:15:54.140010 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-09-18 10:15:54.140025 | orchestrator |
2025-09-18 10:15:54.140036 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-09-18 10:15:54.140048 | orchestrator | Thursday 18 September 2025 10:15:42 +0000 (0:00:00.159) 0:00:00.159 ****
2025-09-18 10:15:54.140058 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:54.140084 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:54.140094 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:54.140104 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:54.140114 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:15:54.140124 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:15:54.140133 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:15:54.140143 | orchestrator |
2025-09-18 10:15:54.140153 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-18 10:15:54.140186 | orchestrator |
2025-09-18 10:15:54.140197 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-18 10:15:54.140207 | orchestrator | Thursday 18 September 2025 10:15:42 +0000 (0:00:00.228) 0:00:00.388 ****
2025-09-18 10:15:54.140217 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:15:54.140227 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:15:54.140237 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:15:54.140247 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:54.140256 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:54.140266 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:54.140275 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:54.140309 | orchestrator |
2025-09-18 10:15:54.140320 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-09-18 10:15:54.140330 | orchestrator |
2025-09-18 10:15:54.140340 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-18 10:15:54.140349 | orchestrator | Thursday 18 September 2025 10:15:46 +0000 (0:00:03.630) 0:00:04.018 ****
2025-09-18 10:15:54.140359 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-09-18 10:15:54.140369 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-09-18 10:15:54.140379 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-09-18 10:15:54.140389 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-09-18 10:15:54.140398 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-09-18 10:15:54.140408 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-09-18 10:15:54.140418 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:15:54.140427 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-09-18 10:15:54.140437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:15:54.140447 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-09-18 10:15:54.140456 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-09-18 10:15:54.140466 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-09-18 10:15:54.140475 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:15:54.140485 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-09-18 10:15:54.140495 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-18 10:15:54.140504 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-09-18 10:15:54.140514 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-09-18 10:15:54.140523 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-18 10:15:54.140533 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-09-18 10:15:54.140542 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:15:54.140552 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-09-18 10:15:54.140562 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-18 10:15:54.140571 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-09-18 10:15:54.140581 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-09-18 10:15:54.140590 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:15:54.140600 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-09-18 10:15:54.140609 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-09-18 10:15:54.140619 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-09-18 10:15:54.140629 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-09-18 10:15:54.140638 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-09-18 10:15:54.140648 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-09-18 10:15:54.140657 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-09-18 10:15:54.140667 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-09-18 10:15:54.140676 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-09-18 10:15:54.140686 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-18 10:15:54.140695 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-09-18 10:15:54.140704 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-09-18 10:15:54.140714 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:15:54.140723 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-09-18 10:15:54.140733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-18 10:15:54.140742 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-09-18 10:15:54.140758 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-09-18 10:15:54.140768 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:15:54.140779 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-09-18 10:15:54.140804 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-09-18 10:15:54.140815 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-18 10:15:54.140825 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:15:54.140850 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-09-18 10:15:54.140860 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-09-18 10:15:54.140870 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-09-18 10:15:54.140879 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-09-18 10:15:54.140889 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-09-18 10:15:54.140899 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-09-18 10:15:54.140908 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:15:54.140918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-09-18 10:15:54.140927 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:15:54.140936 | orchestrator |
2025-09-18 10:15:54.140946 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-09-18 10:15:54.140956 | orchestrator |
2025-09-18 10:15:54.140965 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-09-18 10:15:54.140975 | orchestrator | Thursday 18 September 2025 10:15:46 +0000 (0:00:00.420) 0:00:04.438 ****
2025-09-18 10:15:54.140984 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:54.140993 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:15:54.141003 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:54.141012 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:15:54.141022 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:54.141031 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:54.141041 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:15:54.141050 | orchestrator |
2025-09-18 10:15:54.141059 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-09-18 10:15:54.141069 | orchestrator | Thursday 18 September 2025 10:15:48 +0000 (0:00:01.214) 0:00:05.653 ****
2025-09-18 10:15:54.141079 | orchestrator | ok: [testbed-manager]
2025-09-18 10:15:54.141088 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:15:54.141098 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:15:54.141107 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:15:54.141116 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:15:54.141126 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:15:54.141135 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:15:54.141144 | orchestrator |
2025-09-18 10:15:54.141154 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-09-18 10:15:54.141234 | orchestrator | Thursday 18 September 2025 10:15:49 +0000 (0:00:01.288) 0:00:06.942 ****
2025-09-18 10:15:54.141245 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:15:54.141258 | orchestrator |
2025-09-18 10:15:54.141268 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-09-18 10:15:54.141277 | orchestrator |
Thursday 18 September 2025 10:15:49 +0000 (0:00:00.249) 0:00:07.191 **** 2025-09-18 10:15:54.141287 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:15:54.141297 | orchestrator | changed: [testbed-manager] 2025-09-18 10:15:54.141307 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:15:54.141316 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:15:54.141326 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:15:54.141335 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:15:54.141344 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:15:54.141354 | orchestrator | 2025-09-18 10:15:54.141371 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] *************** 2025-09-18 10:15:54.141380 | orchestrator | Thursday 18 September 2025 10:15:51 +0000 (0:00:01.923) 0:00:09.115 **** 2025-09-18 10:15:54.141390 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:15:54.141401 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:15:54.141413 | orchestrator | 2025-09-18 10:15:54.141427 | orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-09-18 10:15:54.141437 | orchestrator | Thursday 18 September 2025 10:15:51 +0000 (0:00:00.262) 0:00:09.378 **** 2025-09-18 10:15:54.141447 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:15:54.141456 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:15:54.141466 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:15:54.141475 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:15:54.141484 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:15:54.141494 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:15:54.141503 | orchestrator | 2025-09-18 10:15:54.141513 | orchestrator | TASK [osism.commons.proxy : 
Set system wide settings in environment file] ****** 2025-09-18 10:15:54.141522 | orchestrator | Thursday 18 September 2025 10:15:52 +0000 (0:00:01.005) 0:00:10.383 **** 2025-09-18 10:15:54.141532 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:15:54.141541 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:15:54.141551 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:15:54.141560 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:15:54.141570 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:15:54.141579 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:15:54.141588 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:15:54.141598 | orchestrator | 2025-09-18 10:15:54.141607 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-09-18 10:15:54.141617 | orchestrator | Thursday 18 September 2025 10:15:53 +0000 (0:00:00.647) 0:00:11.030 **** 2025-09-18 10:15:54.141626 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:15:54.141636 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:15:54.141645 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:15:54.141654 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:15:54.141664 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:15:54.141673 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:15:54.141683 | orchestrator | ok: [testbed-manager] 2025-09-18 10:15:54.141692 | orchestrator | 2025-09-18 10:15:54.141702 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-09-18 10:15:54.141712 | orchestrator | Thursday 18 September 2025 10:15:54 +0000 (0:00:00.426) 0:00:11.456 **** 2025-09-18 10:15:54.141722 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:15:54.141731 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:15:54.141747 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:16:05.660056 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 10:16:05.660218 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:16:05.660237 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:16:05.660249 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:16:05.660261 | orchestrator | 2025-09-18 10:16:05.660274 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-09-18 10:16:05.660286 | orchestrator | Thursday 18 September 2025 10:15:54 +0000 (0:00:00.200) 0:00:11.656 **** 2025-09-18 10:16:05.660300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:05.660330 | orchestrator | 2025-09-18 10:16:05.660342 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-09-18 10:16:05.660354 | orchestrator | Thursday 18 September 2025 10:15:54 +0000 (0:00:00.289) 0:00:11.946 **** 2025-09-18 10:16:05.660393 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:05.660406 | orchestrator | 2025-09-18 10:16:05.660417 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-09-18 10:16:05.660427 | orchestrator | Thursday 18 September 2025 10:15:54 +0000 (0:00:00.267) 0:00:12.214 **** 2025-09-18 10:16:05.660438 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.660449 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.660460 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.660470 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.660480 | orchestrator | ok: [testbed-node-1] 2025-09-18 
10:16:05.660491 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.660501 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.660512 | orchestrator | 2025-09-18 10:16:05.660522 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-09-18 10:16:05.660533 | orchestrator | Thursday 18 September 2025 10:15:55 +0000 (0:00:01.170) 0:00:13.384 **** 2025-09-18 10:16:05.660544 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:16:05.660554 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:16:05.660565 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:16:05.660578 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:16:05.660590 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:16:05.660603 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:16:05.660614 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:16:05.660627 | orchestrator | 2025-09-18 10:16:05.660639 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-09-18 10:16:05.660651 | orchestrator | Thursday 18 September 2025 10:15:56 +0000 (0:00:00.219) 0:00:13.603 **** 2025-09-18 10:16:05.660663 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.660676 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.660688 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.660700 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.660712 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.660724 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.660736 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.660748 | orchestrator | 2025-09-18 10:16:05.660761 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-09-18 10:16:05.660773 | orchestrator | Thursday 18 September 2025 10:15:56 +0000 (0:00:00.509) 0:00:14.112 **** 2025-09-18 10:16:05.660785 | orchestrator | skipping: 
[testbed-manager] 2025-09-18 10:16:05.660798 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:16:05.660810 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:16:05.660823 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:16:05.660835 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:16:05.660848 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:16:05.660860 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:16:05.660872 | orchestrator | 2025-09-18 10:16:05.660884 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-09-18 10:16:05.660897 | orchestrator | Thursday 18 September 2025 10:15:56 +0000 (0:00:00.292) 0:00:14.405 **** 2025-09-18 10:16:05.660910 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.660921 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:05.660933 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:05.660944 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:16:05.660954 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:05.660964 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:05.660975 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:05.660985 | orchestrator | 2025-09-18 10:16:05.660996 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-09-18 10:16:05.661007 | orchestrator | Thursday 18 September 2025 10:15:57 +0000 (0:00:00.515) 0:00:14.920 **** 2025-09-18 10:16:05.661025 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661036 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:05.661046 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:05.661056 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:16:05.661067 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:05.661077 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:05.661088 | orchestrator | changed: 
[testbed-node-2] 2025-09-18 10:16:05.661098 | orchestrator | 2025-09-18 10:16:05.661109 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-09-18 10:16:05.661119 | orchestrator | Thursday 18 September 2025 10:15:58 +0000 (0:00:01.076) 0:00:15.997 **** 2025-09-18 10:16:05.661130 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.661140 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661184 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.661196 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.661208 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.661219 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.661229 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.661240 | orchestrator | 2025-09-18 10:16:05.661251 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-09-18 10:16:05.661262 | orchestrator | Thursday 18 September 2025 10:15:59 +0000 (0:00:01.081) 0:00:17.078 **** 2025-09-18 10:16:05.661292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:05.661304 | orchestrator | 2025-09-18 10:16:05.661315 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-09-18 10:16:05.661325 | orchestrator | Thursday 18 September 2025 10:16:00 +0000 (0:00:00.400) 0:00:17.479 **** 2025-09-18 10:16:05.661336 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:16:05.661347 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:05.661357 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:05.661368 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:05.661378 | orchestrator | changed: [testbed-node-5] 2025-09-18 
10:16:05.661389 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:05.661399 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:05.661410 | orchestrator | 2025-09-18 10:16:05.661421 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-09-18 10:16:05.661431 | orchestrator | Thursday 18 September 2025 10:16:01 +0000 (0:00:01.142) 0:00:18.622 **** 2025-09-18 10:16:05.661442 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661453 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.661463 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.661474 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.661484 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.661495 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.661505 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.661516 | orchestrator | 2025-09-18 10:16:05.661527 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-09-18 10:16:05.661537 | orchestrator | Thursday 18 September 2025 10:16:01 +0000 (0:00:00.215) 0:00:18.837 **** 2025-09-18 10:16:05.661548 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661559 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.661569 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.661579 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.661590 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.661601 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.661611 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.661622 | orchestrator | 2025-09-18 10:16:05.661633 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-09-18 10:16:05.661643 | orchestrator | Thursday 18 September 2025 10:16:01 +0000 (0:00:00.209) 0:00:19.047 **** 2025-09-18 10:16:05.661654 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661665 | 
orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.661682 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.661693 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.661703 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.661714 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.661724 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.661735 | orchestrator | 2025-09-18 10:16:05.661746 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-09-18 10:16:05.661757 | orchestrator | Thursday 18 September 2025 10:16:01 +0000 (0:00:00.202) 0:00:19.250 **** 2025-09-18 10:16:05.661811 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:05.661825 | orchestrator | 2025-09-18 10:16:05.661836 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-09-18 10:16:05.661847 | orchestrator | Thursday 18 September 2025 10:16:02 +0000 (0:00:00.280) 0:00:19.530 **** 2025-09-18 10:16:05.661858 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.661869 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.661888 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.661908 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.661926 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.661946 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.661963 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.661980 | orchestrator | 2025-09-18 10:16:05.662005 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-09-18 10:16:05.662078 | orchestrator | Thursday 18 September 2025 10:16:02 +0000 (0:00:00.511) 0:00:20.042 **** 2025-09-18 10:16:05.662097 | orchestrator | 
skipping: [testbed-manager] 2025-09-18 10:16:05.662116 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:16:05.662133 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:16:05.662177 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:16:05.662194 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:16:05.662213 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:16:05.662233 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:16:05.662250 | orchestrator | 2025-09-18 10:16:05.662269 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-09-18 10:16:05.662281 | orchestrator | Thursday 18 September 2025 10:16:02 +0000 (0:00:00.237) 0:00:20.279 **** 2025-09-18 10:16:05.662292 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.662302 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.662313 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.662324 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.662334 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:05.662345 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:05.662356 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:05.662366 | orchestrator | 2025-09-18 10:16:05.662377 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-09-18 10:16:05.662387 | orchestrator | Thursday 18 September 2025 10:16:03 +0000 (0:00:01.056) 0:00:21.336 **** 2025-09-18 10:16:05.662398 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.662409 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.662420 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.662430 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.662441 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:05.662451 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:05.662462 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:05.662472 | orchestrator | 
2025-09-18 10:16:05.662483 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-09-18 10:16:05.662494 | orchestrator | Thursday 18 September 2025 10:16:04 +0000 (0:00:00.543) 0:00:21.879 **** 2025-09-18 10:16:05.662504 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:05.662515 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:05.662526 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:05.662536 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:05.662569 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:48.146941 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:48.147016 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:48.147023 | orchestrator | 2025-09-18 10:16:48.147028 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-09-18 10:16:48.147033 | orchestrator | Thursday 18 September 2025 10:16:05 +0000 (0:00:01.218) 0:00:23.097 **** 2025-09-18 10:16:48.147038 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147042 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:48.147046 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147050 | orchestrator | changed: [testbed-manager] 2025-09-18 10:16:48.147054 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:48.147058 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:48.147061 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:48.147065 | orchestrator | 2025-09-18 10:16:48.147069 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-09-18 10:16:48.147073 | orchestrator | Thursday 18 September 2025 10:16:23 +0000 (0:00:18.075) 0:00:41.173 **** 2025-09-18 10:16:48.147077 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147080 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147084 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147088 | orchestrator 
| ok: [testbed-node-5] 2025-09-18 10:16:48.147092 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:48.147095 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:48.147099 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:48.147103 | orchestrator | 2025-09-18 10:16:48.147106 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-09-18 10:16:48.147110 | orchestrator | Thursday 18 September 2025 10:16:23 +0000 (0:00:00.213) 0:00:41.387 **** 2025-09-18 10:16:48.147114 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147118 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147137 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147141 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:48.147145 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:48.147148 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:48.147152 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:48.147156 | orchestrator | 2025-09-18 10:16:48.147160 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to default value] *** 2025-09-18 10:16:48.147164 | orchestrator | Thursday 18 September 2025 10:16:24 +0000 (0:00:00.203) 0:00:41.590 **** 2025-09-18 10:16:48.147167 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147171 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147175 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147179 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:48.147183 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:48.147187 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:48.147190 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:48.147194 | orchestrator | 2025-09-18 10:16:48.147198 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-09-18 10:16:48.147202 | orchestrator | Thursday 18 September 2025 10:16:24 +0000 (0:00:00.221) 0:00:41.811 **** 2025-09-18 
10:16:48.147208 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:48.147213 | orchestrator | 2025-09-18 10:16:48.147217 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-09-18 10:16:48.147221 | orchestrator | Thursday 18 September 2025 10:16:24 +0000 (0:00:00.259) 0:00:42.071 **** 2025-09-18 10:16:48.147225 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147228 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147232 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147236 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:48.147240 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:48.147244 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:48.147266 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:48.147270 | orchestrator | 2025-09-18 10:16:48.147273 | orchestrator | TASK [osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-09-18 10:16:48.147287 | orchestrator | Thursday 18 September 2025 10:16:26 +0000 (0:00:01.931) 0:00:44.002 **** 2025-09-18 10:16:48.147291 | orchestrator | changed: [testbed-manager] 2025-09-18 10:16:48.147295 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:16:48.147299 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:48.147302 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:48.147306 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:48.147310 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:48.147313 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:48.147317 | orchestrator | 2025-09-18 10:16:48.147321 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-09-18 10:16:48.147324 | 
orchestrator | Thursday 18 September 2025 10:16:27 +0000 (0:00:01.149) 0:00:45.152 **** 2025-09-18 10:16:48.147329 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147332 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:16:48.147336 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:16:48.147340 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147344 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:16:48.147347 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:16:48.147351 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147355 | orchestrator | 2025-09-18 10:16:48.147358 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-09-18 10:16:48.147362 | orchestrator | Thursday 18 September 2025 10:16:29 +0000 (0:00:01.629) 0:00:46.782 **** 2025-09-18 10:16:48.147367 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:16:48.147372 | orchestrator | 2025-09-18 10:16:48.147376 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-09-18 10:16:48.147381 | orchestrator | Thursday 18 September 2025 10:16:29 +0000 (0:00:00.275) 0:00:47.057 **** 2025-09-18 10:16:48.147384 | orchestrator | changed: [testbed-manager] 2025-09-18 10:16:48.147388 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:48.147392 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:48.147395 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:16:48.147399 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:48.147403 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:48.147406 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:48.147410 | orchestrator | 2025-09-18 10:16:48.147423 | orchestrator | TASK [osism.services.rsyslog : 
Include additional log server tasks] ************ 2025-09-18 10:16:48.147427 | orchestrator | Thursday 18 September 2025 10:16:30 +0000 (0:00:00.976) 0:00:48.034 **** 2025-09-18 10:16:48.147431 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:16:48.147435 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:16:48.147438 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:16:48.147442 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:16:48.147446 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:16:48.147449 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:16:48.147453 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:16:48.147457 | orchestrator | 2025-09-18 10:16:48.147460 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-09-18 10:16:48.147464 | orchestrator | Thursday 18 September 2025 10:16:30 +0000 (0:00:00.251) 0:00:48.286 **** 2025-09-18 10:16:48.147468 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:16:48.147471 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:16:48.147475 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:16:48.147479 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:16:48.147482 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:16:48.147486 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:16:48.147490 | orchestrator | changed: [testbed-manager] 2025-09-18 10:16:48.147499 | orchestrator | 2025-09-18 10:16:48.147503 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] ***************************** 2025-09-18 10:16:48.147507 | orchestrator | Thursday 18 September 2025 10:16:42 +0000 (0:00:11.322) 0:00:59.608 **** 2025-09-18 10:16:48.147511 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:16:48.147515 | orchestrator | ok: [testbed-manager] 2025-09-18 10:16:48.147519 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:16:48.147524 | orchestrator | ok: [testbed-node-0] 2025-09-18 
10:16:48.147528 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147532 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147536 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147540 | orchestrator |
2025-09-18 10:16:48.147545 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-09-18 10:16:48.147549 | orchestrator | Thursday 18 September 2025 10:16:43 +0000 (0:00:01.490) 0:01:01.098 ****
2025-09-18 10:16:48.147553 | orchestrator | ok: [testbed-manager]
2025-09-18 10:16:48.147557 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:16:48.147561 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:16:48.147565 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147569 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147573 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:16:48.147578 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147582 | orchestrator |
2025-09-18 10:16:48.147586 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-09-18 10:16:48.147590 | orchestrator | Thursday 18 September 2025 10:16:44 +0000 (0:00:01.023) 0:01:02.122 ****
2025-09-18 10:16:48.147594 | orchestrator | ok: [testbed-manager]
2025-09-18 10:16:48.147598 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:16:48.147603 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:16:48.147607 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147611 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:16:48.147615 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147619 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147623 | orchestrator |
2025-09-18 10:16:48.147627 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-09-18 10:16:48.147631 | orchestrator | Thursday 18 September 2025 10:16:44 +0000 (0:00:00.198) 0:01:02.320 ****
2025-09-18 10:16:48.147635 | orchestrator | ok: [testbed-manager]
2025-09-18 10:16:48.147640 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:16:48.147644 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:16:48.147648 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147652 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:16:48.147656 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147660 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147664 | orchestrator |
2025-09-18 10:16:48.147668 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-09-18 10:16:48.147673 | orchestrator | Thursday 18 September 2025 10:16:45 +0000 (0:00:00.207) 0:01:02.527 ****
2025-09-18 10:16:48.147678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:16:48.147682 | orchestrator |
2025-09-18 10:16:48.147686 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-09-18 10:16:48.147691 | orchestrator | Thursday 18 September 2025 10:16:45 +0000 (0:00:00.282) 0:01:02.810 ****
2025-09-18 10:16:48.147695 | orchestrator | ok: [testbed-manager]
2025-09-18 10:16:48.147699 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:16:48.147703 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147707 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:16:48.147711 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147716 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:16:48.147720 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147724 | orchestrator |
2025-09-18 10:16:48.147728 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-09-18 10:16:48.147736 | orchestrator | Thursday 18 September 2025 10:16:47 +0000 (0:00:01.915) 0:01:04.726 ****
2025-09-18 10:16:48.147740 | orchestrator | changed: [testbed-manager]
2025-09-18 10:16:48.147744 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:16:48.147748 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:16:48.147752 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:16:48.147757 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:16:48.147761 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:16:48.147766 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:16:48.147770 | orchestrator |
2025-09-18 10:16:48.147774 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-09-18 10:16:48.147778 | orchestrator | Thursday 18 September 2025 10:16:47 +0000 (0:00:00.608) 0:01:05.334 ****
2025-09-18 10:16:48.147782 | orchestrator | ok: [testbed-manager]
2025-09-18 10:16:48.147787 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:16:48.147791 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:16:48.147795 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:16:48.147799 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:16:48.147803 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:16:48.147807 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:16:48.147811 | orchestrator |
2025-09-18 10:16:48.147818 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-09-18 10:19:08.279099 | orchestrator | Thursday 18 September 2025 10:16:48 +0000 (0:00:00.246) 0:01:05.580 ****
2025-09-18 10:19:08.279220 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:08.279238 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:08.279251 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:08.279262 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:08.279273 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:08.279284 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:08.279294 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:08.279306 | orchestrator |
2025-09-18 10:19:08.279318 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-09-18 10:19:08.279329 | orchestrator | Thursday 18 September 2025 10:16:49 +0000 (0:00:01.148) 0:01:06.729 ****
2025-09-18 10:19:08.279340 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:08.279351 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:08.279362 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:08.279373 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:08.279384 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:08.279394 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:08.279405 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:08.279416 | orchestrator |
2025-09-18 10:19:08.279427 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-09-18 10:19:08.279438 | orchestrator | Thursday 18 September 2025 10:16:51 +0000 (0:00:02.020) 0:01:08.749 ****
2025-09-18 10:19:08.279449 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:08.279460 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:08.279470 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:08.279481 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:08.279492 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:08.279502 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:08.279513 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:08.279524 | orchestrator |
2025-09-18 10:19:08.279535 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-09-18 10:19:08.279545 | orchestrator | Thursday 18 September 2025 10:16:54 +0000 (0:00:02.855) 0:01:11.605 ****
2025-09-18 10:19:08.279556 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:08.279567 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:08.279578 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:08.279591 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:08.279603 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:08.279615 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:08.279627 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:08.279639 | orchestrator |
2025-09-18 10:19:08.279652 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-09-18 10:19:08.279692 | orchestrator | Thursday 18 September 2025 10:17:33 +0000 (0:00:39.375) 0:01:50.981 ****
2025-09-18 10:19:08.279723 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:08.279736 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:08.279749 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:08.279761 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:08.279773 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:08.279785 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:08.279797 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:08.279810 | orchestrator |
2025-09-18 10:19:08.279823 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-09-18 10:19:08.279835 | orchestrator | Thursday 18 September 2025 10:18:54 +0000 (0:01:21.105) 0:03:12.087 ****
2025-09-18 10:19:08.279848 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:08.279860 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:08.279873 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:08.279885 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:08.279897 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:08.279909 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:08.279921 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:08.279934 | orchestrator |
2025-09-18 10:19:08.279945 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-09-18 10:19:08.279957 | orchestrator | Thursday 18 September 2025 10:18:56 +0000 (0:00:01.893) 0:03:13.980 ****
2025-09-18 10:19:08.279990 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:08.280002 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:08.280013 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:08.280028 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:08.280039 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:08.280049 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:08.280060 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:08.280071 | orchestrator |
2025-09-18 10:19:08.280081 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-09-18 10:19:08.280092 | orchestrator | Thursday 18 September 2025 10:19:07 +0000 (0:00:10.724) 0:03:24.704 ****
2025-09-18 10:19:08.280112 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-09-18 10:19:08.280129 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-09-18 10:19:08.280166 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-09-18 10:19:08.280185 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-09-18 10:19:08.280247 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-09-18 10:19:08.280259 | orchestrator |
2025-09-18 10:19:08.280270 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-09-18 10:19:08.280281 | orchestrator | Thursday 18 September 2025 10:19:07 +0000 (0:00:00.305) 0:03:25.010 ****
2025-09-18 10:19:08.280292 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280303 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:08.280314 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280325 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280336 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:19:08.280346 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:19:08.280357 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280368 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:19:08.280379 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280390 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280400 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:19:08.280411 | orchestrator |
2025-09-18 10:19:08.280422 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-09-18 10:19:08.280433 | orchestrator | Thursday 18 September 2025 10:19:08 +0000 (0:00:00.609) 0:03:25.619 ****
2025-09-18 10:19:08.280443 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:08.280455 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:08.280465 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:08.280476 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:08.280487 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:08.280503 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:08.280514 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:08.280525 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:08.280536 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:08.280547 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:08.280558 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:08.280568 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:08.280579 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:08.280590 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:08.280601 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:08.280611 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:08.280622 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:08.280640 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:08.280650 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:08.280661 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:08.280678 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:16.694118 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:16.694214 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:16.694230 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:16.694242 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:16.694254 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:16.694265 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:16.694276 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:16.694287 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:16.694298 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:16.694309 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:16.694321 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:16.694332 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:16.694342 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:16.694353 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:16.694364 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:16.694375 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:16.694387 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:19:16.694398 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:16.694409 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:16.694420 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:16.694431 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:16.694443 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:19:16.694454 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:19:16.694465 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:16.694476 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:16.694487 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-09-18 10:19:16.694498 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:16.694509 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:16.694535 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-09-18 10:19:16.694547 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:16.694578 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:16.694590 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-09-18 10:19:16.694604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:16.694617 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:16.694629 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:16.694641 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:16.694653 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-09-18 10:19:16.694665 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:16.694677 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:16.694690 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-09-18 10:19:16.694702 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:16.694714 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:16.694726 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-09-18 10:19:16.694738 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:16.694768 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:16.694781 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:16.694793 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:16.694806 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:16.694819 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:16.694831 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-09-18 10:19:16.694843 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-09-18 10:19:16.694855 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-09-18 10:19:16.694868 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-09-18 10:19:16.694880 | orchestrator |
2025-09-18 10:19:16.694893 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-09-18 10:19:16.694906 | orchestrator | Thursday 18 September 2025 10:19:14 +0000 (0:00:06.698) 0:03:32.317 ****
2025-09-18 10:19:16.694919 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.694930 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.694941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.694972 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.694984 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.694994 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.695005 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-09-18 10:19:16.695016 | orchestrator |
2025-09-18 10:19:16.695027 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-09-18 10:19:16.695045 | orchestrator | Thursday 18 September 2025 10:19:15 +0000 (0:00:00.548) 0:03:32.866 ****
2025-09-18 10:19:16.695056 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695067 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:16.695078 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695089 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695100 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:19:16.695110 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:19:16.695121 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695132 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:19:16.695143 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695154 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695165 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-09-18 10:19:16.695176 | orchestrator |
2025-09-18 10:19:16.695194 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-09-18 10:19:16.695205 | orchestrator | Thursday 18 September 2025 10:19:15 +0000 (0:00:00.475) 0:03:33.342 ****
2025-09-18 10:19:16.695216 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695227 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:16.695238 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695248 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695259 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:19:16.695270 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:19:16.695281 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695292 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:19:16.695302 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695313 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695324 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-09-18 10:19:16.695335 | orchestrator |
2025-09-18 10:19:16.695345 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-09-18 10:19:16.695356 | orchestrator | Thursday 18 September 2025 10:19:16 +0000 (0:00:00.524) 0:03:33.867 ****
2025-09-18 10:19:16.695367 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:16.695378 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:19:16.695389 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:19:16.695399 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:19:16.695416 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:19:27.920285 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:19:27.920397 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:19:27.920413 | orchestrator |
2025-09-18 10:19:27.920426 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-09-18 10:19:27.920439 | orchestrator | Thursday 18 September 2025 10:19:16 +0000 (0:00:00.272) 0:03:34.140 ****
2025-09-18 10:19:27.920450 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:27.920462 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:27.920474 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:27.920485 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:27.920523 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:27.920535 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:27.920545 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:27.920555 | orchestrator |
2025-09-18 10:19:27.920566 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-09-18 10:19:27.920577 | orchestrator | Thursday 18 September 2025 10:19:22 +0000 (0:00:05.316) 0:03:39.457 ****
2025-09-18 10:19:27.920587 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-09-18 10:19:27.920598 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:27.920609 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-09-18 10:19:27.920619 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:19:27.920630 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-09-18 10:19:27.920640 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:19:27.920651 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-09-18 10:19:27.920661 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:19:27.920672 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-09-18 10:19:27.920682 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-09-18 10:19:27.920693 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:19:27.920707 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:19:27.920717 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-09-18 10:19:27.920728 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:19:27.920739 | orchestrator |
2025-09-18 10:19:27.920749 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-09-18 10:19:27.920760 | orchestrator | Thursday 18 September 2025 10:19:22 +0000 (0:00:00.303) 0:03:39.761 ****
2025-09-18 10:19:27.920771 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-09-18 10:19:27.920781 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-09-18 10:19:27.920792 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-09-18 10:19:27.920803 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-09-18 10:19:27.920814 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-09-18 10:19:27.920827 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-09-18 10:19:27.920839 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-09-18 10:19:27.920850 | orchestrator |
2025-09-18 10:19:27.920863 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-09-18 10:19:27.920875 | orchestrator | Thursday 18 September 2025 10:19:23 +0000 (0:00:00.954) 0:03:40.715 ****
2025-09-18 10:19:27.920888 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:19:27.920904 | orchestrator |
2025-09-18 10:19:27.920917 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-09-18 10:19:27.920930 | orchestrator | Thursday 18 September 2025 10:19:23 +0000 (0:00:00.451) 0:03:41.167 ****
2025-09-18 10:19:27.920965 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:27.920978 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:27.920990 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:27.921001 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:27.921013 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:27.921025 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:27.921037 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:27.921049 | orchestrator |
2025-09-18 10:19:27.921075 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-09-18 10:19:27.921088 | orchestrator | Thursday 18 September 2025 10:19:24 +0000 (0:00:01.278) 0:03:42.445 ****
2025-09-18 10:19:27.921101 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:27.921112 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:27.921124 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:27.921136 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:27.921148 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:27.921160 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:27.921181 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:27.921192 | orchestrator |
2025-09-18 10:19:27.921203 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-09-18 10:19:27.921213 | orchestrator | Thursday 18 September 2025 10:19:25 +0000 (0:00:00.612) 0:03:43.058 ****
2025-09-18 10:19:27.921224 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:27.921235 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:27.921245 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:27.921256 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:27.921266 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:27.921277 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:27.921287 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:27.921298 | orchestrator |
2025-09-18 10:19:27.921308 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-09-18 10:19:27.921320 | orchestrator | Thursday 18 September 2025 10:19:26 +0000 (0:00:00.577) 0:03:43.635 ****
2025-09-18 10:19:27.921330 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:27.921341 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:27.921351 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:27.921362 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:27.921372 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:27.921383 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:27.921393 | orchestrator | ok: [testbed-node-1]
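The `osism.commons.sysctl` tasks earlier in this play apply five per-group kernel-parameter profiles. As an aid to reading the log, the group names and values below are copied verbatim from the `included:` lines; the `render_sysctl_conf` helper is a hypothetical illustration (not part of the role) showing how one profile maps to a sysctl.d-style fragment:

```python
# Sysctl profiles as shown in the "Include sysctl tasks" loop above.
SYSCTL_PROFILES = {
    "elasticsearch": {"vm.max_map_count": 262144},
    "rabbitmq": {
        "net.ipv4.tcp_keepalive_time": 6,
        "net.ipv4.tcp_keepalive_intvl": 3,
        "net.ipv4.tcp_keepalive_probes": 3,
        "net.core.wmem_max": 16777216,
        "net.core.rmem_max": 16777216,
        "net.ipv4.tcp_fin_timeout": 20,
        "net.ipv4.tcp_tw_reuse": 1,
        "net.core.somaxconn": 4096,
        "net.ipv4.tcp_syncookies": 0,
        "net.ipv4.tcp_max_syn_backlog": 8192,
    },
    "generic": {"vm.swappiness": 1},
    "compute": {"net.netfilter.nf_conntrack_max": 1048576},
    "k3s_node": {"fs.inotify.max_user_instances": 1024},
}


def render_sysctl_conf(profile: str) -> str:
    """Hypothetical helper: render one profile as a sysctl.d-style fragment."""
    return "\n".join(f"{key} = {value}"
                     for key, value in SYSCTL_PROFILES[profile].items())


print(render_sysctl_conf("generic"))  # vm.swappiness = 1
```

In the log, only hosts in the matching group get a `changed`/`ok` result for a profile; the rest are `skipping`, which is why the manager skips the rabbitmq items while testbed-node-0/1/2 apply them.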
2025-09-18 10:19:27.921404 | orchestrator |
2025-09-18 10:19:27.921415 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-09-18 10:19:27.921425 | orchestrator | Thursday 18 September 2025 10:19:26 +0000 (0:00:00.768) 0:03:44.404 ****
2025-09-18 10:19:27.921468 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189423.3905852, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921484 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189461.8161037, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921496 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189450.6001668, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921508 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189466.3329716, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921524 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189461.0680196, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921543 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189449.6071901, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921554 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 567, 'dev': 2049, 'nlink': 1, 'atime': 1758189457.245093, 'mtime': 1740432309.0, 'ctime': 1743685035.2598536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:27.921574 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790067 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790199 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790213 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790253 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790264 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790274 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 554, 'dev': 2049, 'nlink': 1, 'atime': 1743684808.8363404, 'mtime': 1712646062.0, 'ctime': 1743685035.2588537, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:19:43.790285 | orchestrator |
2025-09-18 10:19:43.790297 | orchestrator | TASK [osism.commons.motd : Copy motd file] *************************************
2025-09-18 10:19:43.790308 | orchestrator | Thursday 18 September 2025 10:19:27 +0000 (0:00:00.952) 0:03:45.356 ****
2025-09-18 10:19:43.790318 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:43.790329 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:43.790338 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:43.790348 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:43.790357 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:43.790366 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:43.790375 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:43.790385 | orchestrator |
2025-09-18 10:19:43.790394 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************
2025-09-18 10:19:43.790404 | orchestrator | Thursday 18 September 2025 10:19:29 +0000 (0:00:01.161) 0:03:46.518 ****
2025-09-18 10:19:43.790414 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:43.790423 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:43.790432 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:43.790441 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:43.790468 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:43.790479 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:43.790488 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:43.790497 | orchestrator |
2025-09-18 10:19:43.790508 | orchestrator | TASK [osism.commons.motd : Copy issue.net file] ********************************
2025-09-18 10:19:43.790519 | orchestrator | Thursday 18 September 2025 10:19:30 +0000 (0:00:01.227) 0:03:47.745 ****
2025-09-18 10:19:43.790529 | orchestrator | changed: [testbed-manager]
2025-09-18 10:19:43.790540 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:43.790550 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:43.790560 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:43.790570 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:43.790581 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:43.790592 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:43.790602 | orchestrator |
2025-09-18 10:19:43.790613 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ********************
2025-09-18 10:19:43.790624 | orchestrator | Thursday 18 September 2025 10:19:31 +0000 (0:00:01.155) 0:03:48.901 ****
2025-09-18 10:19:43.790643 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:19:43.790655 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:19:43.790665 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:19:43.790676 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:19:43.790686 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:19:43.790696 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:19:43.790707 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:19:43.790718 | orchestrator |
2025-09-18 10:19:43.790729 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] ****************
2025-09-18 10:19:43.790740 | orchestrator | Thursday 18 September 2025 10:19:31 +0000 (0:00:00.277) 0:03:49.178 ****
2025-09-18 10:19:43.790751 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.790779 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.790790 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.790801 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:43.790811 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.790822 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.790832 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:43.790843 | orchestrator |
2025-09-18 10:19:43.790854 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-09-18 10:19:43.790865 | orchestrator | Thursday 18 September 2025 10:19:32 +0000 (0:00:00.745) 0:03:49.924 ****
2025-09-18 10:19:43.790877 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:19:43.790889 | orchestrator |
2025-09-18 10:19:43.790898 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-09-18 10:19:43.790908 | orchestrator | Thursday 18 September 2025 10:19:32 +0000 (0:00:00.363) 0:03:50.287 ****
2025-09-18 10:19:43.790934 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.790944 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:19:43.790953 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:19:43.790963 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:19:43.790972 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:19:43.790981 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:19:43.790991 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:19:43.791000 | orchestrator |
2025-09-18 10:19:43.791010 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-09-18 10:19:43.791019 | orchestrator | Thursday 18 September 2025 10:19:40 +0000 (0:00:07.907) 0:03:58.195 ****
2025-09-18 10:19:43.791033 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.791043 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.791053 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.791062 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.791071 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:43.791081 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.791090 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:43.791100 | orchestrator |
2025-09-18 10:19:43.791110 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-09-18 10:19:43.791120 | orchestrator | Thursday 18 September 2025 10:19:41 +0000 (0:00:01.113) 0:03:59.309 ****
2025-09-18 10:19:43.791129 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.791139 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.791148 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.791157 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.791166 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.791176 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:43.791185 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:43.791194 | orchestrator |
2025-09-18 10:19:43.791204 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-09-18 10:19:43.791214 | orchestrator | Thursday 18 September 2025 10:19:42 +0000 (0:00:00.958) 0:04:00.267 ****
2025-09-18 10:19:43.791223 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.791239 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.791248 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.791257 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.791267 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.791276 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:43.791285 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:43.791295 | orchestrator |
2025-09-18 10:19:43.791305 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-09-18 10:19:43.791315 | orchestrator | Thursday 18 September 2025 10:19:43 +0000 (0:00:00.289) 0:04:00.556 ****
2025-09-18 10:19:43.791324 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.791333 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.791343 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.791352 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.791361 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.791370 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:19:43.791380 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:19:43.791389 | orchestrator |
2025-09-18 10:19:43.791399 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-09-18 10:19:43.791408 | orchestrator | Thursday 18 September 2025 10:19:43 +0000 (0:00:00.408) 0:04:00.965 ****
2025-09-18 10:19:43.791418 | orchestrator | ok: [testbed-manager]
2025-09-18 10:19:43.791427 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:19:43.791436 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:19:43.791445 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:19:43.791455 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:19:43.791470 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:20:54.411307 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:20:54.411445 | orchestrator |
2025-09-18 10:20:54.411459 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-09-18 10:20:54.411469 | orchestrator | Thursday 18 September 2025 10:19:43 +0000 (0:00:00.268) 0:04:01.233 ****
2025-09-18 10:20:54.411476 | orchestrator | ok: [testbed-manager]
2025-09-18 10:20:54.411484 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:20:54.411491 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:20:54.411498 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:20:54.411506 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:20:54.411513 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:20:54.411520 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:20:54.411527 | orchestrator |
2025-09-18 10:20:54.411534 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-09-18 10:20:54.411542 | orchestrator | Thursday 18 September 2025 10:19:49 +0000 (0:00:05.596) 0:04:06.829 ****
2025-09-18 10:20:54.411551 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:20:54.411561 | orchestrator |
2025-09-18 10:20:54.411569 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-09-18 10:20:54.411576 | orchestrator | Thursday 18 September 2025 10:19:49 +0000 (0:00:00.392) 0:04:07.221 ****
2025-09-18 10:20:54.411583 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411590 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-09-18 10:20:54.411597 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411604 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-09-18 10:20:54.411611 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:20:54.411618 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411626 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:20:54.411633 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-09-18 10:20:54.411639 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411646 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:20:54.411654 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-09-18 10:20:54.411693 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411700 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:20:54.411707 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-09-18 10:20:54.411714 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411721 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-09-18 10:20:54.411728 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:20:54.411735 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:20:54.411742 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-09-18 10:20:54.411749 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-09-18 10:20:54.411757 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:20:54.411764 | orchestrator |
2025-09-18 10:20:54.411771 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-09-18 10:20:54.411778 | orchestrator | Thursday 18 September 2025 10:19:50 +0000 (0:00:00.338) 0:04:07.560 ****
2025-09-18 10:20:54.411802 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:20:54.411809 | orchestrator |
2025-09-18 10:20:54.411847 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-09-18 10:20:54.411856 | orchestrator | Thursday 18 September 2025 10:19:50 +0000 (0:00:00.402) 0:04:07.963 ****
2025-09-18 10:20:54.411865 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-09-18 10:20:54.411873 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:20:54.411882 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-09-18 10:20:54.411891 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-09-18 10:20:54.411899 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:20:54.411906 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:20:54.411914 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-09-18 10:20:54.411921 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-09-18 10:20:54.411929 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:20:54.411937 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-09-18 10:20:54.411944 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:20:54.411952 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:20:54.411960 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-09-18 10:20:54.411967 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:20:54.411975 | orchestrator |
2025-09-18 10:20:54.411982 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-09-18 10:20:54.411989 | orchestrator | Thursday 18 September 2025 10:19:50 +0000 (0:00:00.295) 0:04:08.258 ****
2025-09-18 10:20:54.411997 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:20:54.412005 | orchestrator |
2025-09-18 10:20:54.412013 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-09-18 10:20:54.412020 | orchestrator | Thursday 18 September 2025 10:19:51 +0000 (0:00:00.425) 0:04:08.684 ****
2025-09-18 10:20:54.412028 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412058 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412065 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412072 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412078 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412085 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412092 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412098 | orchestrator |
2025-09-18 10:20:54.412114 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-09-18 10:20:54.412122 | orchestrator | Thursday 18 September 2025 10:20:25 +0000 (0:00:34.566) 0:04:43.250 ****
2025-09-18 10:20:54.412129 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412136 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412142 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412149 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412156 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412162 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412167 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412173 | orchestrator |
2025-09-18 10:20:54.412179 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-09-18 10:20:54.412186 | orchestrator | Thursday 18 September 2025 10:20:34 +0000 (0:00:08.771) 0:04:52.022 ****
2025-09-18 10:20:54.412194 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412202 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412210 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412217 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412224 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412231 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412238 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412244 | orchestrator |
2025-09-18 10:20:54.412251 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-09-18 10:20:54.412258 | orchestrator | Thursday 18 September 2025 10:20:42 +0000 (0:00:08.006) 0:05:00.029 ****
2025-09-18 10:20:54.412266 | orchestrator | ok: [testbed-manager]
2025-09-18 10:20:54.412273 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:20:54.412280 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:20:54.412287 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:20:54.412294 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:20:54.412301 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:20:54.412308 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:20:54.412315 | orchestrator |
2025-09-18 10:20:54.412323 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-09-18 10:20:54.412331 | orchestrator | Thursday 18 September 2025 10:20:44 +0000 (0:00:01.734) 0:05:01.763 ****
2025-09-18 10:20:54.412338 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412345 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412352 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412358 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412365 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412371 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412378 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412385 | orchestrator |
2025-09-18 10:20:54.412392 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-09-18 10:20:54.412399 | orchestrator | Thursday 18 September 2025 10:20:50 +0000 (0:00:05.853) 0:05:07.616 ****
2025-09-18 10:20:54.412407 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:20:54.412417 | orchestrator |
2025-09-18 10:20:54.412432 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-09-18 10:20:54.412440 | orchestrator | Thursday 18 September 2025 10:20:50 +0000 (0:00:00.546) 0:05:08.163 ****
2025-09-18 10:20:54.412446 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412453 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412461 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412468 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412474 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412481 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412488 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412494 | orchestrator |
2025-09-18 10:20:54.412501 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-09-18 10:20:54.412516 | orchestrator | Thursday 18 September 2025 10:20:51 +0000 (0:00:00.726) 0:05:08.889 ****
2025-09-18 10:20:54.412523 | orchestrator | ok: [testbed-manager]
2025-09-18 10:20:54.412530 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:20:54.412537 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:20:54.412544 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:20:54.412551 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:20:54.412557 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:20:54.412564 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:20:54.412571 | orchestrator |
2025-09-18 10:20:54.412578 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-09-18 10:20:54.412585 | orchestrator | Thursday 18 September 2025 10:20:53 +0000 (0:00:01.840) 0:05:10.730 ****
2025-09-18 10:20:54.412592 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:20:54.412599 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:20:54.412605 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:20:54.412612 | orchestrator | changed: [testbed-manager]
2025-09-18 10:20:54.412618 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:20:54.412626 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:20:54.412632 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:20:54.412639 | orchestrator |
2025-09-18 10:20:54.412645 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-09-18 10:20:54.412652 | orchestrator | Thursday 18 September 2025 10:20:54 +0000 (0:00:00.850) 0:05:11.580 ****
2025-09-18 10:20:54.412659 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:20:54.412666 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:20:54.412672 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:20:54.412679 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:20:54.412686 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:20:54.412693 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:20:54.412700 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:20:54.412706 | orchestrator |
2025-09-18 10:20:54.412713 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-09-18 10:20:54.412730 | orchestrator | Thursday 18 September 2025 10:20:54 +0000 (0:00:00.270) 0:05:11.851 ****
2025-09-18 10:21:20.630399 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:21:20.630535 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:21:20.630550 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:21:20.630562 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:21:20.630573 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:21:20.630584 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:21:20.630595 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:21:20.630606 | orchestrator |
2025-09-18 10:21:20.630618 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-09-18 10:21:20.630631 | orchestrator | Thursday 18 September 2025 10:20:54 +0000 (0:00:00.355) 0:05:12.206 ****
2025-09-18 10:21:20.630642 | orchestrator | ok: [testbed-manager]
2025-09-18 10:21:20.630654 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:21:20.630665 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:21:20.630676 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:21:20.630686 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:21:20.630697 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:21:20.630708 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:21:20.630718 | orchestrator |
2025-09-18 10:21:20.630729 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-09-18 10:21:20.630740 | orchestrator | Thursday 18 September 2025 10:20:55 +0000 (0:00:00.287) 0:05:12.494 ****
2025-09-18 10:21:20.630752 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:21:20.630763 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:21:20.630773 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:21:20.630847 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:21:20.630860 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:21:20.630871 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:21:20.630881 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:21:20.630926 | orchestrator |
2025-09-18 10:21:20.630940 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-09-18 10:21:20.630953 | orchestrator | Thursday 18 September 2025 10:20:55 +0000 (0:00:00.266) 0:05:12.760 ****
2025-09-18 10:21:20.630965 | orchestrator | ok: [testbed-manager]
2025-09-18 10:21:20.630977 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:21:20.630989 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:21:20.631001 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:21:20.631013 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:21:20.631025 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:21:20.631037 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:21:20.631049 | orchestrator |
2025-09-18 10:21:20.631060 | orchestrator | TASK [osism.services.docker : Print used docker version] ***********************
2025-09-18 10:21:20.631072 | orchestrator | Thursday 18 September 2025 10:20:55 +0000 (0:00:00.270) 0:05:13.031 ****
2025-09-18 10:21:20.631084 | orchestrator | ok: [testbed-manager] =>
2025-09-18 10:21:20.631096 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631109 | orchestrator | ok: [testbed-node-3] =>
2025-09-18 10:21:20.631120 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631131 | orchestrator | ok: [testbed-node-4] =>
2025-09-18 10:21:20.631143 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631155 | orchestrator | ok: [testbed-node-5] =>
2025-09-18 10:21:20.631167 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631178 | orchestrator | ok: [testbed-node-0] =>
2025-09-18 10:21:20.631190 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631202 | orchestrator | ok: [testbed-node-1] =>
2025-09-18 10:21:20.631214 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631226 | orchestrator | ok: [testbed-node-2] =>
2025-09-18 10:21:20.631238 | orchestrator |  docker_version: 5:27.5.1
2025-09-18 10:21:20.631250 | orchestrator |
2025-09-18 10:21:20.631262 | orchestrator | TASK [osism.services.docker : Print used docker cli version] *******************
2025-09-18 10:21:20.631274 | orchestrator | Thursday 18 September 2025 10:20:55 +0000 (0:00:00.295) 0:05:13.327 ****
2025-09-18 10:21:20.631284 | orchestrator | ok: [testbed-manager] =>
2025-09-18 10:21:20.631295 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631306 | orchestrator | ok: [testbed-node-3] =>
2025-09-18 10:21:20.631316 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631326 | orchestrator | ok: [testbed-node-4] =>
2025-09-18 10:21:20.631337 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631347 | orchestrator | ok: [testbed-node-5] =>
2025-09-18 10:21:20.631358 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631368 | orchestrator | ok: [testbed-node-0] =>
2025-09-18 10:21:20.631378 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631389 | orchestrator | ok: [testbed-node-1] =>
2025-09-18 10:21:20.631399 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631410 | orchestrator | ok: [testbed-node-2] =>
2025-09-18 10:21:20.631420 | orchestrator |  docker_cli_version: 5:27.5.1
2025-09-18 10:21:20.631430 | orchestrator |
2025-09-18 10:21:20.631441 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-09-18 10:21:20.631452 | orchestrator | Thursday 18 September 2025 10:20:56 +0000 (0:00:00.255) 0:05:13.582 ****
2025-09-18 10:21:20.631462 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:21:20.631472 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:21:20.631483 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:21:20.631493 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:21:20.631504 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:21:20.631514 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:21:20.631525 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:21:20.631535 | orchestrator |
2025-09-18 10:21:20.631546 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-09-18 10:21:20.631556 | orchestrator | Thursday 18 September 2025 10:20:56 +0000 (0:00:00.254) 0:05:13.836 ****
2025-09-18 10:21:20.631567 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:21:20.631586 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:21:20.631597 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:21:20.631608 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:21:20.631618 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:21:20.631629 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:21:20.631639 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:21:20.631650 | orchestrator | 2025-09-18 10:21:20.631661 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ******************** 2025-09-18 10:21:20.631671 | orchestrator | Thursday 18 September 2025 10:20:56 +0000 (0:00:00.266) 0:05:14.103 **** 2025-09-18 10:21:20.631702 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:21:20.631717 | orchestrator | 2025-09-18 10:21:20.631728 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-09-18 10:21:20.631738 | orchestrator | Thursday 18 September 2025 10:20:57 +0000 (0:00:00.445) 0:05:14.548 **** 2025-09-18 10:21:20.631749 | orchestrator | ok: [testbed-manager] 2025-09-18 10:21:20.631760 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:21:20.631770 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:21:20.631781 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:21:20.631812 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:21:20.631823 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:21:20.631834 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:21:20.631844 | orchestrator | 2025-09-18 10:21:20.631855 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-09-18 10:21:20.631866 | orchestrator | Thursday 18 September 2025 10:20:57 +0000 (0:00:00.836) 0:05:15.385 **** 2025-09-18 10:21:20.631877 | orchestrator 
| ok: [testbed-manager] 2025-09-18 10:21:20.631887 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:21:20.631898 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:21:20.631909 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:21:20.631919 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:21:20.631930 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:21:20.631940 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:21:20.631951 | orchestrator | 2025-09-18 10:21:20.631962 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-09-18 10:21:20.631974 | orchestrator | Thursday 18 September 2025 10:21:01 +0000 (0:00:03.186) 0:05:18.571 **** 2025-09-18 10:21:20.631984 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-09-18 10:21:20.631996 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-09-18 10:21:20.632006 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-09-18 10:21:20.632017 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-09-18 10:21:20.632028 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-09-18 10:21:20.632038 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-09-18 10:21:20.632049 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:21:20.632059 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-09-18 10:21:20.632070 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-09-18 10:21:20.632080 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-09-18 10:21:20.632091 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:21:20.632122 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-09-18 10:21:20.632133 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-09-18 10:21:20.632144 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  
2025-09-18 10:21:20.632155 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:21:20.632165 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-09-18 10:21:20.632175 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-09-18 10:21:20.632186 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-09-18 10:21:20.632204 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:21:20.632215 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-09-18 10:21:20.632226 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-09-18 10:21:20.632237 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:21:20.632247 | orchestrator | skipping: [testbed-node-1] => (item=docker-engine)  2025-09-18 10:21:20.632258 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:21:20.632274 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-09-18 10:21:20.632285 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-09-18 10:21:20.632295 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-09-18 10:21:20.632306 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:21:20.632316 | orchestrator | 2025-09-18 10:21:20.632327 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-09-18 10:21:20.632338 | orchestrator | Thursday 18 September 2025 10:21:01 +0000 (0:00:00.624) 0:05:19.195 **** 2025-09-18 10:21:20.632349 | orchestrator | ok: [testbed-manager] 2025-09-18 10:21:20.632359 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:21:20.632370 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:21:20.632380 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:21:20.632391 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:21:20.632402 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:21:20.632412 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:21:20.632423 | 
orchestrator | 2025-09-18 10:21:20.632433 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-09-18 10:21:20.632444 | orchestrator | Thursday 18 September 2025 10:21:08 +0000 (0:00:06.495) 0:05:25.691 **** 2025-09-18 10:21:20.632455 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:21:20.632465 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:21:20.632476 | orchestrator | ok: [testbed-manager] 2025-09-18 10:21:20.632487 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:21:20.632497 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:21:20.632507 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:21:20.632518 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:21:20.632529 | orchestrator | 2025-09-18 10:21:20.632540 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-09-18 10:21:20.632550 | orchestrator | Thursday 18 September 2025 10:21:09 +0000 (0:00:01.344) 0:05:27.035 **** 2025-09-18 10:21:20.632561 | orchestrator | ok: [testbed-manager] 2025-09-18 10:21:20.632571 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:21:20.632582 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:21:20.632593 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:21:20.632603 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:21:20.632614 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:21:20.632624 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:21:20.632635 | orchestrator | 2025-09-18 10:21:20.632645 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-09-18 10:21:20.632656 | orchestrator | Thursday 18 September 2025 10:21:17 +0000 (0:00:07.958) 0:05:34.994 **** 2025-09-18 10:21:20.632667 | orchestrator | changed: [testbed-manager] 2025-09-18 10:21:20.632677 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:21:20.632688 | orchestrator | changed: 
[testbed-node-3] 2025-09-18 10:21:20.632705 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.021440 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.021590 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.021607 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.021620 | orchestrator | 2025-09-18 10:22:06.021633 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-09-18 10:22:06.021646 | orchestrator | Thursday 18 September 2025 10:21:20 +0000 (0:00:03.074) 0:05:38.068 **** 2025-09-18 10:22:06.021658 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.021670 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.021681 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.021723 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.021784 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.021796 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.021807 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.021818 | orchestrator | 2025-09-18 10:22:06.021830 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-09-18 10:22:06.021841 | orchestrator | Thursday 18 September 2025 10:21:21 +0000 (0:00:01.348) 0:05:39.417 **** 2025-09-18 10:22:06.021852 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.021863 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.021873 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.021884 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.021895 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.021906 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.021916 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.021927 | orchestrator | 2025-09-18 10:22:06.021938 | orchestrator | TASK [osism.services.docker : Unlock containerd package] 
*********************** 2025-09-18 10:22:06.021952 | orchestrator | Thursday 18 September 2025 10:21:23 +0000 (0:00:01.318) 0:05:40.735 **** 2025-09-18 10:22:06.021964 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.021977 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.021990 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.022003 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.022067 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.022082 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.022095 | orchestrator | changed: [testbed-manager] 2025-09-18 10:22:06.022107 | orchestrator | 2025-09-18 10:22:06.022120 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-09-18 10:22:06.022168 | orchestrator | Thursday 18 September 2025 10:21:24 +0000 (0:00:00.765) 0:05:41.501 **** 2025-09-18 10:22:06.022181 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.022194 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.022207 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.022219 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.022232 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.022244 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.022256 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.022269 | orchestrator | 2025-09-18 10:22:06.022282 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-09-18 10:22:06.022295 | orchestrator | Thursday 18 September 2025 10:21:33 +0000 (0:00:09.712) 0:05:51.213 **** 2025-09-18 10:22:06.022306 | orchestrator | changed: [testbed-manager] 2025-09-18 10:22:06.022317 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.022327 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.022338 | orchestrator | changed: [testbed-node-5] 2025-09-18 
10:22:06.022349 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.022360 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.022370 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.022381 | orchestrator | 2025-09-18 10:22:06.022392 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-09-18 10:22:06.022421 | orchestrator | Thursday 18 September 2025 10:21:34 +0000 (0:00:01.007) 0:05:52.221 **** 2025-09-18 10:22:06.022433 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.022444 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.022454 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.022465 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.022476 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.022486 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.022497 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.022507 | orchestrator | 2025-09-18 10:22:06.022518 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-09-18 10:22:06.022529 | orchestrator | Thursday 18 September 2025 10:21:43 +0000 (0:00:09.218) 0:06:01.440 **** 2025-09-18 10:22:06.022550 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.022561 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.022572 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.022583 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.022594 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.022604 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.022615 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.022626 | orchestrator | 2025-09-18 10:22:06.022636 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-09-18 10:22:06.022647 | orchestrator | Thursday 18 September 2025 10:21:55 +0000 
(0:00:11.948) 0:06:13.388 **** 2025-09-18 10:22:06.022658 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-09-18 10:22:06.022670 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-09-18 10:22:06.022680 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-09-18 10:22:06.022691 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-09-18 10:22:06.022702 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-09-18 10:22:06.022712 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-09-18 10:22:06.022723 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-09-18 10:22:06.022753 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-09-18 10:22:06.022765 | orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-09-18 10:22:06.022776 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-09-18 10:22:06.022787 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-09-18 10:22:06.022797 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-09-18 10:22:06.022808 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-09-18 10:22:06.022819 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-09-18 10:22:06.022830 | orchestrator | 2025-09-18 10:22:06.022841 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-09-18 10:22:06.022872 | orchestrator | Thursday 18 September 2025 10:21:57 +0000 (0:00:01.213) 0:06:14.602 **** 2025-09-18 10:22:06.022884 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.022895 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.022905 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.022916 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.022927 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.022937 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 10:22:06.022948 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.022959 | orchestrator | 2025-09-18 10:22:06.022970 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-09-18 10:22:06.022980 | orchestrator | Thursday 18 September 2025 10:21:57 +0000 (0:00:00.502) 0:06:15.105 **** 2025-09-18 10:22:06.022991 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.023002 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:06.023012 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:06.023023 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:06.023034 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:06.023044 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:06.023055 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:06.023066 | orchestrator | 2025-09-18 10:22:06.023077 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-09-18 10:22:06.023089 | orchestrator | Thursday 18 September 2025 10:22:01 +0000 (0:00:03.823) 0:06:18.928 **** 2025-09-18 10:22:06.023099 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.023110 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.023121 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.023132 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.023142 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.023153 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.023164 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.023183 | orchestrator | 2025-09-18 10:22:06.023194 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-09-18 10:22:06.023206 | orchestrator | Thursday 18 September 2025 10:22:01 +0000 (0:00:00.498) 0:06:19.426 **** 2025-09-18 10:22:06.023217 | 
orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-09-18 10:22:06.023228 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-09-18 10:22:06.023239 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.023249 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-09-18 10:22:06.023260 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-09-18 10:22:06.023271 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.023281 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-09-18 10:22:06.023292 | orchestrator | skipping: [testbed-node-4] => (item=python-docker)  2025-09-18 10:22:06.023303 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.023314 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-09-18 10:22:06.023324 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-09-18 10:22:06.023335 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.023346 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-09-18 10:22:06.023356 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-09-18 10:22:06.023367 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.023377 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-09-18 10:22:06.023393 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-09-18 10:22:06.023404 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.023415 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-09-18 10:22:06.023426 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-09-18 10:22:06.023436 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.023447 | orchestrator | 2025-09-18 10:22:06.023458 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from 
pip)] *** 2025-09-18 10:22:06.023468 | orchestrator | Thursday 18 September 2025 10:22:02 +0000 (0:00:00.713) 0:06:20.140 **** 2025-09-18 10:22:06.023479 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.023490 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.023500 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.023511 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.023522 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.023532 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.023543 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.023554 | orchestrator | 2025-09-18 10:22:06.023564 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-09-18 10:22:06.023576 | orchestrator | Thursday 18 September 2025 10:22:03 +0000 (0:00:00.523) 0:06:20.664 **** 2025-09-18 10:22:06.023586 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.023597 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.023607 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.023618 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:22:06.023629 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.023639 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.023650 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.023660 | orchestrator | 2025-09-18 10:22:06.023671 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-09-18 10:22:06.023682 | orchestrator | Thursday 18 September 2025 10:22:03 +0000 (0:00:00.476) 0:06:21.141 **** 2025-09-18 10:22:06.023693 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:06.023703 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:22:06.023714 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:22:06.023725 | orchestrator | skipping: [testbed-node-5] 
2025-09-18 10:22:06.023762 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:22:06.023781 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:22:06.023792 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:22:06.023802 | orchestrator | 2025-09-18 10:22:06.023813 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-09-18 10:22:06.023824 | orchestrator | Thursday 18 September 2025 10:22:04 +0000 (0:00:00.515) 0:06:21.656 **** 2025-09-18 10:22:06.023835 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:06.023853 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:22:27.714687 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:22:27.714880 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:22:27.714894 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:22:27.714904 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:22:27.714914 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:22:27.714924 | orchestrator | 2025-09-18 10:22:27.714936 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-09-18 10:22:27.714948 | orchestrator | Thursday 18 September 2025 10:22:06 +0000 (0:00:01.807) 0:06:23.464 **** 2025-09-18 10:22:27.714959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:22:27.714971 | orchestrator | 2025-09-18 10:22:27.714981 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-09-18 10:22:27.714990 | orchestrator | Thursday 18 September 2025 10:22:06 +0000 (0:00:00.874) 0:06:24.338 **** 2025-09-18 10:22:27.715000 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:27.715009 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:27.715020 | orchestrator | changed: [testbed-node-4] 
2025-09-18 10:22:27.715029 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:27.715039 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:27.715048 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:27.715057 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:27.715067 | orchestrator | 2025-09-18 10:22:27.715076 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-09-18 10:22:27.715086 | orchestrator | Thursday 18 September 2025 10:22:07 +0000 (0:00:00.865) 0:06:25.203 **** 2025-09-18 10:22:27.715095 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:27.715105 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:27.715114 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:27.715123 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:27.715134 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:27.715143 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:27.715152 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:27.715162 | orchestrator | 2025-09-18 10:22:27.715171 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-09-18 10:22:27.715181 | orchestrator | Thursday 18 September 2025 10:22:08 +0000 (0:00:00.930) 0:06:26.134 **** 2025-09-18 10:22:27.715190 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:27.715199 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:27.715209 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:27.715218 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:27.715227 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:27.715237 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:27.715246 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:27.715256 | orchestrator | 2025-09-18 10:22:27.715265 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] 
*** 2025-09-18 10:22:27.715276 | orchestrator | Thursday 18 September 2025 10:22:09 +0000 (0:00:01.258) 0:06:27.392 **** 2025-09-18 10:22:27.715285 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:22:27.715295 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:22:27.715304 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:22:27.715314 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:22:27.715323 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:22:27.715332 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:22:27.715368 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:22:27.715378 | orchestrator | 2025-09-18 10:22:27.715388 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ****************** 2025-09-18 10:22:27.715418 | orchestrator | Thursday 18 September 2025 10:22:11 +0000 (0:00:01.431) 0:06:28.824 **** 2025-09-18 10:22:27.715428 | orchestrator | ok: [testbed-manager] 2025-09-18 10:22:27.715438 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:27.715447 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:27.715457 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:27.715466 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:22:27.715476 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:22:27.715485 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:22:27.715494 | orchestrator | 2025-09-18 10:22:27.715504 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] ************* 2025-09-18 10:22:27.715514 | orchestrator | Thursday 18 September 2025 10:22:12 +0000 (0:00:01.223) 0:06:30.047 **** 2025-09-18 10:22:27.715523 | orchestrator | changed: [testbed-manager] 2025-09-18 10:22:27.715532 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:22:27.715542 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:22:27.715551 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:22:27.715561 | orchestrator | changed: [testbed-node-0] 
2025-09-18 10:22:27.715570 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:27.715579 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:27.715588 | orchestrator |
2025-09-18 10:22:27.715598 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-09-18 10:22:27.715607 | orchestrator | Thursday 18 September 2025 10:22:13 +0000 (0:00:01.328) 0:06:31.376 ****
2025-09-18 10:22:27.715617 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:22:27.715627 | orchestrator |
2025-09-18 10:22:27.715636 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-09-18 10:22:27.715646 | orchestrator | Thursday 18 September 2025 10:22:14 +0000 (0:00:01.055) 0:06:32.431 ****
2025-09-18 10:22:27.715655 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:27.715665 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:27.715675 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:27.715685 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:27.715694 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:27.715703 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:27.715729 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:27.715739 | orchestrator |
2025-09-18 10:22:27.715749 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-09-18 10:22:27.715758 | orchestrator | Thursday 18 September 2025 10:22:16 +0000 (0:00:01.401) 0:06:33.833 ****
2025-09-18 10:22:27.715768 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:27.715777 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:27.715806 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:27.715816 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:27.715825 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:27.715835 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:27.715844 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:27.715854 | orchestrator |
2025-09-18 10:22:27.715863 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-09-18 10:22:27.715873 | orchestrator | Thursday 18 September 2025 10:22:17 +0000 (0:00:01.176) 0:06:35.009 ****
2025-09-18 10:22:27.715882 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:27.715892 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:27.715901 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:27.715910 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:27.715920 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:27.715929 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:27.715938 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:27.715948 | orchestrator |
2025-09-18 10:22:27.715957 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-09-18 10:22:27.715975 | orchestrator | Thursday 18 September 2025 10:22:18 +0000 (0:00:01.172) 0:06:36.181 ****
2025-09-18 10:22:27.715985 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:27.715994 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:27.716004 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:27.716013 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:27.716023 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:27.716032 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:27.716042 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:27.716051 | orchestrator |
2025-09-18 10:22:27.716061 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-09-18 10:22:27.716071 | orchestrator | Thursday 18 September 2025 10:22:19 +0000 (0:00:01.220) 0:06:37.401 ****
2025-09-18 10:22:27.716080 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:22:27.716090 | orchestrator |
2025-09-18 10:22:27.716100 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716110 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:01.062) 0:06:38.464 ****
2025-09-18 10:22:27.716119 | orchestrator |
2025-09-18 10:22:27.716129 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716138 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.038) 0:06:38.503 ****
2025-09-18 10:22:27.716148 | orchestrator |
2025-09-18 10:22:27.716157 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716167 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.039) 0:06:38.543 ****
2025-09-18 10:22:27.716176 | orchestrator |
2025-09-18 10:22:27.716186 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716195 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.045) 0:06:38.588 ****
2025-09-18 10:22:27.716204 | orchestrator |
2025-09-18 10:22:27.716214 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716223 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.038) 0:06:38.626 ****
2025-09-18 10:22:27.716233 | orchestrator |
2025-09-18 10:22:27.716242 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716252 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.037) 0:06:38.664 ****
2025-09-18 10:22:27.716261 | orchestrator |
2025-09-18 10:22:27.716271 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-09-18 10:22:27.716281 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.044) 0:06:38.708 ****
2025-09-18 10:22:27.716290 | orchestrator |
2025-09-18 10:22:27.716300 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-09-18 10:22:27.716309 | orchestrator | Thursday 18 September 2025 10:22:21 +0000 (0:00:00.037) 0:06:38.746 ****
2025-09-18 10:22:27.716319 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:27.716328 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:27.716338 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:27.716347 | orchestrator |
2025-09-18 10:22:27.716357 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-09-18 10:22:27.716367 | orchestrator | Thursday 18 September 2025 10:22:22 +0000 (0:00:01.176) 0:06:39.923 ****
2025-09-18 10:22:27.716376 | orchestrator | changed: [testbed-manager]
2025-09-18 10:22:27.716386 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:27.716395 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:27.716405 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:27.716414 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:27.716424 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:27.716433 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:27.716443 | orchestrator |
2025-09-18 10:22:27.716452 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-09-18 10:22:27.716468 | orchestrator | Thursday 18 September 2025 10:22:23 +0000 (0:00:01.408) 0:06:41.332 ****
2025-09-18 10:22:27.716477 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:27.716487 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:27.716496 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:27.716506 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:27.716515 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:27.716525 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:27.716534 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:27.716544 | orchestrator |
2025-09-18 10:22:27.716554 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-09-18 10:22:27.716564 | orchestrator | Thursday 18 September 2025 10:22:26 +0000 (0:00:02.573) 0:06:43.905 ****
2025-09-18 10:22:27.716573 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:27.716583 | orchestrator |
2025-09-18 10:22:27.716592 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-09-18 10:22:27.716602 | orchestrator | Thursday 18 September 2025 10:22:26 +0000 (0:00:00.109) 0:06:44.014 ****
2025-09-18 10:22:27.716611 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:27.716621 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:27.716630 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:27.716639 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:27.716655 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:53.663902 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:53.664043 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:53.664059 | orchestrator |
2025-09-18 10:22:53.664095 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-09-18 10:22:53.664109 | orchestrator | Thursday 18 September 2025 10:22:27 +0000 (0:00:01.139) 0:06:45.153 ****
2025-09-18 10:22:53.664122 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.664133 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.664144 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.664155 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.664166 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.664177 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.664187 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.664198 | orchestrator |
2025-09-18 10:22:53.664209 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-09-18 10:22:53.664220 | orchestrator | Thursday 18 September 2025 10:22:28 +0000 (0:00:00.531) 0:06:45.684 ****
2025-09-18 10:22:53.664232 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:22:53.664246 | orchestrator |
2025-09-18 10:22:53.664257 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-09-18 10:22:53.664269 | orchestrator | Thursday 18 September 2025 10:22:29 +0000 (0:00:01.048) 0:06:46.733 ****
2025-09-18 10:22:53.664280 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.664292 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:53.664303 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:53.664314 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:53.664324 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:53.664335 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:53.664345 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:53.664356 | orchestrator |
2025-09-18 10:22:53.664367 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-09-18 10:22:53.664377 | orchestrator | Thursday 18 September 2025 10:22:30 +0000 (0:00:00.844) 0:06:47.577 ****
2025-09-18 10:22:53.664389 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-09-18 10:22:53.664402 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-09-18 10:22:53.664415 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-09-18 10:22:53.664458 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-09-18 10:22:53.664471 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-09-18 10:22:53.664484 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-09-18 10:22:53.664497 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-09-18 10:22:53.664508 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-09-18 10:22:53.664518 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-09-18 10:22:53.664529 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-09-18 10:22:53.664540 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-09-18 10:22:53.664550 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-09-18 10:22:53.664561 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-09-18 10:22:53.664577 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-09-18 10:22:53.664588 | orchestrator |
2025-09-18 10:22:53.664599 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-09-18 10:22:53.664609 | orchestrator | Thursday 18 September 2025 10:22:32 +0000 (0:00:02.579) 0:06:50.157 ****
2025-09-18 10:22:53.664620 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.664630 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.664641 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.664652 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.664662 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.664673 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.664710 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.664722 | orchestrator |
2025-09-18 10:22:53.664733 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-09-18 10:22:53.664743 | orchestrator | Thursday 18 September 2025 10:22:33 +0000 (0:00:00.476) 0:06:50.634 ****
2025-09-18 10:22:53.664757 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:22:53.664770 | orchestrator |
2025-09-18 10:22:53.664781 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-09-18 10:22:53.664792 | orchestrator | Thursday 18 September 2025 10:22:34 +0000 (0:00:00.981) 0:06:51.616 ****
2025-09-18 10:22:53.664802 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.664813 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:53.664824 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:53.664834 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:53.664845 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:53.664855 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:53.664866 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:53.664877 | orchestrator |
2025-09-18 10:22:53.664887 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-09-18 10:22:53.664898 | orchestrator | Thursday 18 September 2025 10:22:35 +0000 (0:00:00.837) 0:06:52.454 ****
2025-09-18 10:22:53.664909 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.664920 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:53.664930 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:53.664941 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:53.664951 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:53.664961 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:53.664972 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:53.664982 | orchestrator |
2025-09-18 10:22:53.664993 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-09-18 10:22:53.665026 | orchestrator | Thursday 18 September 2025 10:22:35 +0000 (0:00:00.818) 0:06:53.272 ****
2025-09-18 10:22:53.665038 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.665049 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.665059 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.665094 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.665115 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.665144 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.665161 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.665178 | orchestrator |
2025-09-18 10:22:53.665196 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-09-18 10:22:53.665213 | orchestrator | Thursday 18 September 2025 10:22:36 +0000 (0:00:00.493) 0:06:53.765 ****
2025-09-18 10:22:53.665229 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665246 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:53.665263 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:53.665282 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:53.665299 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:53.665317 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:53.665333 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:53.665353 | orchestrator |
2025-09-18 10:22:53.665372 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-09-18 10:22:53.665391 | orchestrator | Thursday 18 September 2025 10:22:38 +0000 (0:00:01.692) 0:06:55.457 ****
2025-09-18 10:22:53.665405 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.665416 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.665427 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.665438 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.665448 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.665458 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.665469 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.665479 | orchestrator |
2025-09-18 10:22:53.665490 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-09-18 10:22:53.665500 | orchestrator | Thursday 18 September 2025 10:22:38 +0000 (0:00:00.469) 0:06:55.927 ****
2025-09-18 10:22:53.665511 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665521 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:53.665532 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:53.665542 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:53.665553 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:53.665563 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:53.665574 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:53.665584 | orchestrator |
2025-09-18 10:22:53.665595 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-09-18 10:22:53.665605 | orchestrator | Thursday 18 September 2025 10:22:46 +0000 (0:00:07.803) 0:07:03.731 ****
2025-09-18 10:22:53.665616 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665626 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:53.665636 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:53.665647 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:53.665657 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:53.665667 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:53.665678 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:53.665716 | orchestrator |
2025-09-18 10:22:53.665728 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-09-18 10:22:53.665739 | orchestrator | Thursday 18 September 2025 10:22:47 +0000 (0:00:01.400) 0:07:05.131 ****
2025-09-18 10:22:53.665750 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665760 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:53.665771 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:53.665781 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:53.665791 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:53.665809 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:53.665820 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:53.665830 | orchestrator |
2025-09-18 10:22:53.665841 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-09-18 10:22:53.665852 | orchestrator | Thursday 18 September 2025 10:22:49 +0000 (0:00:01.719) 0:07:06.851 ****
2025-09-18 10:22:53.665862 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665883 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:22:53.665894 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:22:53.665905 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:22:53.665915 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:22:53.665926 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:22:53.665936 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:22:53.665947 | orchestrator |
2025-09-18 10:22:53.665958 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-18 10:22:53.665968 | orchestrator | Thursday 18 September 2025 10:22:51 +0000 (0:00:00.859) 0:07:08.766 ****
2025-09-18 10:22:53.665979 | orchestrator | ok: [testbed-manager]
2025-09-18 10:22:53.665989 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:22:53.666000 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:22:53.666011 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:22:53.666122 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:22:53.666144 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:22:53.666159 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:22:53.666170 | orchestrator |
2025-09-18 10:22:53.666180 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-18 10:22:53.666191 | orchestrator | Thursday 18 September 2025 10:22:52 +0000 (0:00:00.967) 0:07:09.626 ****
2025-09-18 10:22:53.666202 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.666212 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.666222 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.666233 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.666243 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.666254 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.666264 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.666274 | orchestrator |
2025-09-18 10:22:53.666285 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-09-18 10:22:53.666295 | orchestrator | Thursday 18 September 2025 10:22:53 +0000 (0:00:00.511) 0:07:10.593 ****
2025-09-18 10:22:53.666306 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:22:53.666316 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:22:53.666326 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:22:53.666337 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:22:53.666347 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:22:53.666358 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:22:53.666368 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:22:53.666379 | orchestrator |
2025-09-18 10:22:53.666402 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-09-18 10:23:27.618127 | orchestrator | Thursday 18 September 2025 10:22:53 +0000 (0:00:00.511) 0:07:11.105 ****
2025-09-18 10:23:27.618248 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:27.618265 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:27.618277 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:27.618288 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:27.618299 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:27.618310 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:27.618321 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:27.618333 | orchestrator |
2025-09-18 10:23:27.618345 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-09-18 10:23:27.618356 | orchestrator | Thursday 18 September 2025 10:22:54 +0000 (0:00:00.516) 0:07:11.621 ****
2025-09-18 10:23:27.618367 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:27.618378 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:27.618389 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:27.618400 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:27.618411 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:27.618422 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:27.618432 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:27.618443 | orchestrator |
2025-09-18 10:23:27.618454 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-09-18 10:23:27.618465 | orchestrator | Thursday 18 September 2025 10:22:54 +0000 (0:00:00.513) 0:07:12.134 ****
2025-09-18 10:23:27.618506 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:27.618517 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:27.618528 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:27.618539 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:27.618549 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:27.618560 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:27.618571 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:27.618581 | orchestrator |
2025-09-18 10:23:27.618592 | orchestrator | TASK [osism.services.chrony : Populate service facts] ************************** 2025-09-18 10:23:27.618603 | orchestrator | Thursday 18 September 2025 10:22:55 +0000 (0:00:00.525) 0:07:12.660 **** 2025-09-18 10:23:27.618614 | orchestrator | ok: [testbed-manager] 2025-09-18 10:23:27.618625 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:23:27.618635 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:23:27.618646 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:23:27.618683 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:23:27.618693 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:23:27.618704 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:23:27.618714 | orchestrator | 2025-09-18 10:23:27.618725 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************ 2025-09-18 10:23:27.618736 | orchestrator | Thursday 18 September 2025 10:23:01 +0000 (0:00:05.950) 0:07:18.610 **** 2025-09-18 10:23:27.618747 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:23:27.618759 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:23:27.618769 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:23:27.618780 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:23:27.618790 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:23:27.618801 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:23:27.618811 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:23:27.618822 | orchestrator | 2025-09-18 10:23:27.618833 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] ***** 2025-09-18 10:23:27.618843 | orchestrator | Thursday 18 September 2025 10:23:01 +0000 (0:00:00.543) 0:07:19.153 **** 2025-09-18 10:23:27.618881 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, 
testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:23:27.618906 | orchestrator | 2025-09-18 10:23:27.618920 | orchestrator | TASK [osism.services.chrony : Install package] ********************************* 2025-09-18 10:23:27.618931 | orchestrator | Thursday 18 September 2025 10:23:02 +0000 (0:00:00.843) 0:07:19.997 **** 2025-09-18 10:23:27.618942 | orchestrator | ok: [testbed-manager] 2025-09-18 10:23:27.618952 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:23:27.618963 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:23:27.618974 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:23:27.618984 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:23:27.618995 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:23:27.619005 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:23:27.619016 | orchestrator | 2025-09-18 10:23:27.619026 | orchestrator | TASK [osism.services.chrony : Manage chrony service] *************************** 2025-09-18 10:23:27.619037 | orchestrator | Thursday 18 September 2025 10:23:04 +0000 (0:00:02.147) 0:07:22.145 **** 2025-09-18 10:23:27.619047 | orchestrator | ok: [testbed-manager] 2025-09-18 10:23:27.619058 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:23:27.619068 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:23:27.619078 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:23:27.619089 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:23:27.619099 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:23:27.619109 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:23:27.619120 | orchestrator | 2025-09-18 10:23:27.619130 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] ************** 2025-09-18 10:23:27.619141 | orchestrator | Thursday 18 September 2025 10:23:06 +0000 (0:00:01.396) 0:07:23.542 **** 2025-09-18 10:23:27.619151 | orchestrator | ok: [testbed-manager] 2025-09-18 10:23:27.619162 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:23:27.619180 | 
orchestrator | ok: [testbed-node-5] 2025-09-18 10:23:27.619191 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:23:27.619201 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:23:27.619212 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:23:27.619222 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:23:27.619232 | orchestrator | 2025-09-18 10:23:27.619243 | orchestrator | TASK [osism.services.chrony : Copy configuration file] ************************* 2025-09-18 10:23:27.619254 | orchestrator | Thursday 18 September 2025 10:23:07 +0000 (0:00:00.928) 0:07:24.470 **** 2025-09-18 10:23:27.619265 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619278 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619289 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619318 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619330 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619341 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619351 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2) 2025-09-18 10:23:27.619362 | orchestrator | 2025-09-18 10:23:27.619373 | orchestrator | TASK [osism.services.lldpd : Include 
distribution specific install tasks] ****** 2025-09-18 10:23:27.619383 | orchestrator | Thursday 18 September 2025 10:23:08 +0000 (0:00:01.730) 0:07:26.201 **** 2025-09-18 10:23:27.619395 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:23:27.619405 | orchestrator | 2025-09-18 10:23:27.619416 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] **************************** 2025-09-18 10:23:27.619426 | orchestrator | Thursday 18 September 2025 10:23:09 +0000 (0:00:01.016) 0:07:27.217 **** 2025-09-18 10:23:27.619437 | orchestrator | changed: [testbed-manager] 2025-09-18 10:23:27.619447 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:23:27.619458 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:23:27.619468 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:23:27.619479 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:23:27.619489 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:23:27.619499 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:23:27.619510 | orchestrator | 2025-09-18 10:23:27.619520 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] ***************************** 2025-09-18 10:23:27.619531 | orchestrator | Thursday 18 September 2025 10:23:19 +0000 (0:00:09.441) 0:07:36.659 **** 2025-09-18 10:23:27.619541 | orchestrator | ok: [testbed-manager] 2025-09-18 10:23:27.619552 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:23:27.619562 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:23:27.619572 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:23:27.619583 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:23:27.619593 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:23:27.619604 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:23:27.619614 | 
orchestrator |
2025-09-18 10:23:27.619625 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-09-18 10:23:27.619635 | orchestrator | Thursday 18 September 2025 10:23:21 +0000 (0:00:01.902) 0:07:38.561 ****
2025-09-18 10:23:27.619646 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:27.619673 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:27.619690 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:27.619701 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:27.619711 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:27.619721 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:27.619732 | orchestrator |
2025-09-18 10:23:27.619742 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-09-18 10:23:27.619759 | orchestrator | Thursday 18 September 2025 10:23:22 +0000 (0:00:01.319) 0:07:39.881 ****
2025-09-18 10:23:27.619769 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:27.619780 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:27.619790 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:27.619801 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:27.619811 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:27.619822 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:27.619832 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:27.619842 | orchestrator |
2025-09-18 10:23:27.619853 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-09-18 10:23:27.619863 | orchestrator |
2025-09-18 10:23:27.619874 | orchestrator | TASK [Include hardening role] **************************************************
2025-09-18 10:23:27.619885 | orchestrator | Thursday 18 September 2025 10:23:23 +0000 (0:00:01.407) 0:07:41.288 ****
2025-09-18 10:23:27.619895 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:23:27.619906 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:23:27.619916 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:23:27.619927 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:23:27.619937 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:23:27.619948 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:23:27.619958 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:23:27.619968 | orchestrator |
2025-09-18 10:23:27.619979 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-09-18 10:23:27.619989 | orchestrator |
2025-09-18 10:23:27.620000 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-09-18 10:23:27.620011 | orchestrator | Thursday 18 September 2025 10:23:24 +0000 (0:00:00.550) 0:07:41.839 ****
2025-09-18 10:23:27.620021 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:27.620031 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:27.620042 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:27.620052 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:27.620063 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:27.620073 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:27.620083 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:27.620094 | orchestrator |
2025-09-18 10:23:27.620104 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-09-18 10:23:27.620115 | orchestrator | Thursday 18 September 2025 10:23:25 +0000 (0:00:01.514) 0:07:43.353 ****
2025-09-18 10:23:27.620125 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:27.620136 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:27.620146 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:27.620156 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:27.620167 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:27.620177 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:27.620188 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:27.620198 | orchestrator |
2025-09-18 10:23:27.620209 | orchestrator | TASK [Include auditd role] *****************************************************
2025-09-18 10:23:27.620225 | orchestrator | Thursday 18 September 2025 10:23:27 +0000 (0:00:01.698) 0:07:45.052 ****
2025-09-18 10:23:51.212457 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:23:51.212580 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:23:51.212596 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:23:51.212610 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:23:51.212622 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:23:51.212685 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:23:51.212697 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:23:51.212737 | orchestrator |
2025-09-18 10:23:51.212751 | orchestrator | TASK [Include smartd role] *****************************************************
2025-09-18 10:23:51.212763 | orchestrator | Thursday 18 September 2025 10:23:28 +0000 (0:00:00.508) 0:07:45.561 ****
2025-09-18 10:23:51.212774 | orchestrator | included: osism.services.smartd for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:23:51.212786 | orchestrator |
2025-09-18 10:23:51.212797 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-09-18 10:23:51.212808 | orchestrator | Thursday 18 September 2025 10:23:29 +0000 (0:00:01.088) 0:07:46.649 ****
2025-09-18 10:23:51.212821 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:23:51.212835 | orchestrator |
2025-09-18 10:23:51.212845 | orchestrator |
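The osism.services.journald tasks above copy a configuration file to every host and later restart the service through a handler. The actual file shipped by the role is not shown in this log; as an illustration only, a journald drop-in of that kind typically looks like the following (path and every value here are assumptions, not taken from this run):

```ini
# /etc/systemd/journald.conf.d/99-osism.conf -- illustrative values only
[Journal]
# Keep logs across reboots instead of in tmpfs
Storage=persistent
# Cap total disk usage of the journal
SystemMaxUse=1G
# Drop entries older than one month
MaxRetentionSec=1month
```

A change to such a file only takes effect after `systemd-journald` is restarted, which is exactly what the "Restart journald service" handler further down in this log does.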
TASK [osism.services.smartd : Install smartmontools package] *******************
2025-09-18 10:23:51.212856 | orchestrator | Thursday 18 September 2025 10:23:30 +0000 (0:00:00.820) 0:07:47.470 ****
2025-09-18 10:23:51.212866 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.212877 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.212887 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.212898 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.212908 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.212919 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.212929 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.212940 | orchestrator |
2025-09-18 10:23:51.212950 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-09-18 10:23:51.212961 | orchestrator | Thursday 18 September 2025 10:23:37 +0000 (0:00:07.908) 0:07:55.378 ****
2025-09-18 10:23:51.212972 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.212983 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.212995 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213008 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213019 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213031 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213042 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213054 | orchestrator |
2025-09-18 10:23:51.213066 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-09-18 10:23:51.213078 | orchestrator | Thursday 18 September 2025 10:23:38 +0000 (0:00:00.942) 0:07:56.321 ****
2025-09-18 10:23:51.213090 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.213101 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.213113 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213124 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213136 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213148 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213160 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213171 | orchestrator |
2025-09-18 10:23:51.213184 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-09-18 10:23:51.213196 | orchestrator | Thursday 18 September 2025 10:23:40 +0000 (0:00:01.641) 0:07:57.962 ****
2025-09-18 10:23:51.213208 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.213220 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.213231 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213243 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213255 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213267 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213279 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213291 | orchestrator |
2025-09-18 10:23:51.213302 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-09-18 10:23:51.213314 | orchestrator | Thursday 18 September 2025 10:23:42 +0000 (0:00:01.404) 0:07:59.743 ****
2025-09-18 10:23:51.213326 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.213348 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.213358 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213369 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213379 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213389 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213400 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213410 | orchestrator |
2025-09-18 10:23:51.213421 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-09-18 10:23:51.213432 | orchestrator | Thursday 18 September 2025 10:23:43 +0000 (0:00:01.404) 0:08:01.148 ****
2025-09-18 10:23:51.213442 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.213453 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.213464 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213474 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213484 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213495 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213505 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213515 | orchestrator |
2025-09-18 10:23:51.213526 | orchestrator | PLAY [Set state bootstrap] *****************************************************
2025-09-18 10:23:51.213537 | orchestrator |
2025-09-18 10:23:51.213547 | orchestrator | TASK [Set osism.bootstrap.status fact] *****************************************
2025-09-18 10:23:51.213558 | orchestrator | Thursday 18 September 2025 10:23:45 +0000 (0:00:01.377) 0:08:02.526 ****
2025-09-18 10:23:51.213568 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:23:51.213579 | orchestrator |
2025-09-18 10:23:51.213590 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-18 10:23:51.213645 | orchestrator | Thursday 18 September 2025 10:23:45 +0000 (0:00:00.829) 0:08:03.355 ****
2025-09-18 10:23:51.213666 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:51.213685 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:51.213702 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:51.213720 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:51.213738 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:51.213757 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:51.213775 | orchestrator | ok: [testbed-node-2]
2025-09-18
10:23:51.213794 | orchestrator |
2025-09-18 10:23:51.213806 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-18 10:23:51.213817 | orchestrator | Thursday 18 September 2025 10:23:46 +0000 (0:00:00.841) 0:08:04.196 ****
2025-09-18 10:23:51.213827 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.213838 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.213848 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.213859 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.213869 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.213880 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.213890 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.213901 | orchestrator |
2025-09-18 10:23:51.213911 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] **************************************
2025-09-18 10:23:51.213922 | orchestrator | Thursday 18 September 2025 10:23:48 +0000 (0:00:01.382) 0:08:05.579 ****
2025-09-18 10:23:51.213981 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:23:51.213993 | orchestrator |
2025-09-18 10:23:51.214003 | orchestrator | TASK [osism.commons.state : Create custom facts directory] *********************
2025-09-18 10:23:51.214077 | orchestrator | Thursday 18 September 2025 10:23:48 +0000 (0:00:00.867) 0:08:06.447 ****
2025-09-18 10:23:51.214092 | orchestrator | ok: [testbed-manager]
2025-09-18 10:23:51.214103 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:23:51.214113 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:23:51.214124 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:23:51.214134 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:23:51.214155 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:23:51.214166 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:23:51.214177 | orchestrator |
2025-09-18 10:23:51.214198 | orchestrator | TASK [osism.commons.state : Write state into file] *****************************
2025-09-18 10:23:51.214210 | orchestrator | Thursday 18 September 2025 10:23:49 +0000 (0:00:00.845) 0:08:07.292 ****
2025-09-18 10:23:51.214221 | orchestrator | changed: [testbed-manager]
2025-09-18 10:23:51.214232 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:23:51.214243 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:23:51.214254 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:23:51.214265 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:23:51.214275 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:23:51.214286 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:23:51.214296 | orchestrator |
2025-09-18 10:23:51.214307 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:23:51.214320 | orchestrator | testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-09-18 10:23:51.214331 | orchestrator | testbed-node-0 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-18 10:23:51.214348 | orchestrator | testbed-node-1 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-18 10:23:51.214359 | orchestrator | testbed-node-2 : ok=171  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-18 10:23:51.214370 | orchestrator | testbed-node-3 : ok=170  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-09-18 10:23:51.214381 | orchestrator | testbed-node-4 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-18 10:23:51.214392 | orchestrator | testbed-node-5 : ok=170  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-09-18 10:23:51.214403 | orchestrator |
2025-09-18 10:23:51.214414 | orchestrator |
2025-09-18
10:23:51.214425 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:23:51.214435 | orchestrator | Thursday 18 September 2025 10:23:51 +0000 (0:00:01.341) 0:08:08.633 ****
2025-09-18 10:23:51.214446 | orchestrator | ===============================================================================
2025-09-18 10:23:51.214457 | orchestrator | osism.commons.packages : Install required packages --------------------- 81.11s
2025-09-18 10:23:51.214468 | orchestrator | osism.commons.packages : Download required packages -------------------- 39.38s
2025-09-18 10:23:51.214479 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 34.57s
2025-09-18 10:23:51.214489 | orchestrator | osism.commons.repository : Update package cache ------------------------ 18.08s
2025-09-18 10:23:51.214500 | orchestrator | osism.services.docker : Install docker package ------------------------- 11.95s
2025-09-18 10:23:51.214511 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.32s
2025-09-18 10:23:51.214522 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 10.72s
2025-09-18 10:23:51.214534 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.71s
2025-09-18 10:23:51.214544 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.44s
2025-09-18 10:23:51.214555 | orchestrator | osism.services.docker : Install docker-cli package ---------------------- 9.22s
2025-09-18 10:23:51.214578 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.77s
2025-09-18 10:23:51.636271 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 8.01s
2025-09-18 10:23:51.636368 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.96s
2025-09-18 10:23:51.636410 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 7.91s
2025-09-18 10:23:51.636422 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.91s
2025-09-18 10:23:51.636434 | orchestrator | osism.commons.docker_compose : Install docker-compose-plugin package ---- 7.80s
2025-09-18 10:23:51.636445 | orchestrator | osism.commons.sysctl : Set sysctl parameters on rabbitmq ---------------- 6.70s
2025-09-18 10:23:51.636456 | orchestrator | osism.services.docker : Install apt-transport-https package ------------- 6.50s
2025-09-18 10:23:51.636467 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.95s
2025-09-18 10:23:51.636478 | orchestrator | osism.commons.cleanup : Remove dependencies that are no longer required --- 5.85s
2025-09-18 10:23:51.976924 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-09-18 10:23:51.977015 | orchestrator | + osism apply network
2025-09-18 10:24:04.513088 | orchestrator | 2025-09-18 10:24:04 | INFO  | Task 823860c7-d490-4f2d-b354-f0c6e23e9cf2 (network) was prepared for execution.
2025-09-18 10:24:04.513194 | orchestrator | 2025-09-18 10:24:04 | INFO  | It takes a moment until task 823860c7-d490-4f2d-b354-f0c6e23e9cf2 (network) has been started and output is visible here.
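The PLAY RECAP lines above follow Ansible's fixed `host : key=value ...` layout, which makes them easy to post-process, for example to fail a wrapper script when any host reports failures. A small standalone sketch (the helper name `parse_recap` is ours, not part of any OSISM or Zuul tooling):

```python
import re

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Split one Ansible PLAY RECAP line into (host, counters)."""
    host, _, rest = line.partition(" : ")
    counters = {key: int(val) for key, val in re.findall(r"(\w+)=(\d+)", rest)}
    return host.strip(), counters

# Recap line as emitted in this job (console prefix trimmed off):
host, counters = parse_recap(
    "testbed-manager : ok=163  changed=38  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0"
)
print(host, counters["failed"], counters["changed"])  # → testbed-manager 0 38
```

Treating `failed` and `unreachable` as the interesting counters mirrors how Ansible itself decides the exit status of a play.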
2025-09-18 10:24:32.810385 | orchestrator |
2025-09-18 10:24:32.810484 | orchestrator | PLAY [Apply role network] ******************************************************
2025-09-18 10:24:32.810501 | orchestrator |
2025-09-18 10:24:32.810513 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-09-18 10:24:32.810525 | orchestrator | Thursday 18 September 2025 10:24:08 +0000 (0:00:00.282) 0:00:00.282 ****
2025-09-18 10:24:32.810536 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.810547 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.810558 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.810570 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.810581 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.810636 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.810647 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.810658 | orchestrator |
2025-09-18 10:24:32.810669 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-09-18 10:24:32.810680 | orchestrator | Thursday 18 September 2025 10:24:09 +0000 (0:00:00.706) 0:00:00.989 ****
2025-09-18 10:24:32.810692 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:24:32.810705 | orchestrator |
2025-09-18 10:24:32.810716 | orchestrator | TASK [osism.commons.network : Install required packages] ***********************
2025-09-18 10:24:32.810727 | orchestrator | Thursday 18 September 2025 10:24:10 +0000 (0:00:01.261) 0:00:02.251 ****
2025-09-18 10:24:32.810737 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.810748 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.810759 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.810769 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.810780 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.810790 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.810801 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.810812 | orchestrator |
2025-09-18 10:24:32.810823 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] *************************
2025-09-18 10:24:32.810833 | orchestrator | Thursday 18 September 2025 10:24:12 +0000 (0:00:02.022) 0:00:04.273 ****
2025-09-18 10:24:32.810844 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.810855 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.810865 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.810876 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.810887 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.810897 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.810907 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.810918 | orchestrator |
2025-09-18 10:24:32.810929 | orchestrator | TASK [osism.commons.network : Create required directories] *********************
2025-09-18 10:24:32.810963 | orchestrator | Thursday 18 September 2025 10:24:14 +0000 (0:00:01.824) 0:00:06.097 ****
2025-09-18 10:24:32.810975 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan)
2025-09-18 10:24:32.810986 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan)
2025-09-18 10:24:32.810996 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan)
2025-09-18 10:24:32.811007 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan)
2025-09-18 10:24:32.811018 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan)
2025-09-18 10:24:32.811028 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan)
2025-09-18 10:24:32.811039 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan)
2025-09-18 10:24:32.811050 | orchestrator |
2025-09-18 10:24:32.811060 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] **********
2025-09-18 10:24:32.811071 | orchestrator | Thursday 18 September 2025 10:24:15 +0000 (0:00:01.055) 0:00:07.153 ****
2025-09-18 10:24:32.811081 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-18 10:24:32.811092 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-18 10:24:32.811103 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:24:32.811113 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-18 10:24:32.811124 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-18 10:24:32.811134 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-18 10:24:32.811145 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 10:24:32.811155 | orchestrator |
2025-09-18 10:24:32.811166 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] **********************
2025-09-18 10:24:32.811176 | orchestrator | Thursday 18 September 2025 10:24:19 +0000 (0:00:03.221) 0:00:10.375 ****
2025-09-18 10:24:32.811187 | orchestrator | changed: [testbed-manager]
2025-09-18 10:24:32.811198 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:24:32.811208 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:24:32.811219 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:24:32.811229 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:24:32.811239 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:24:32.811250 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:24:32.811260 | orchestrator |
2025-09-18 10:24:32.811271 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] ***********
2025-09-18 10:24:32.811282 | orchestrator | Thursday 18 September 2025 10:24:20 +0000 (0:00:01.491) 0:00:11.867 ****
2025-09-18 10:24:32.811292 | orchestrator | ok: [testbed-manager -> localhost]
2025-09-18 10:24:32.811303 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:24:32.811313 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-18 10:24:32.811324 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 10:24:32.811334 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-18 10:24:32.811345 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-18 10:24:32.811355 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-18 10:24:32.811366 | orchestrator |
2025-09-18 10:24:32.811376 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] *********
2025-09-18 10:24:32.811387 | orchestrator | Thursday 18 September 2025 10:24:22 +0000 (0:00:01.935) 0:00:13.803 ****
2025-09-18 10:24:32.811397 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.811408 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.811418 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.811429 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.811439 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.811450 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.811460 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.811471 | orchestrator |
2025-09-18 10:24:32.811481 | orchestrator | TASK [osism.commons.network : Copy interfaces file] ****************************
2025-09-18 10:24:32.811507 | orchestrator | Thursday 18 September 2025 10:24:23 +0000 (0:00:01.185) 0:00:14.988 ****
2025-09-18 10:24:32.811519 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:24:32.811529 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:24:32.811540 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:24:32.811558 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:24:32.811569 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:24:32.811579 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:24:32.811608 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:24:32.811620 | orchestrator |
2025-09-18 10:24:32.811631 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] *************
2025-09-18 10:24:32.811641 | orchestrator | Thursday 18 September 2025 10:24:24 +0000 (0:00:00.735) 0:00:15.723 ****
2025-09-18 10:24:32.811652 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.811663 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.811673 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.811684 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.811695 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.811705 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.811716 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.811726 | orchestrator |
2025-09-18 10:24:32.811737 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] *************************
2025-09-18 10:24:32.811748 | orchestrator | Thursday 18 September 2025 10:24:26 +0000 (0:00:02.153) 0:00:17.877 ****
2025-09-18 10:24:32.811758 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:24:32.811769 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:24:32.811780 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:24:32.811790 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:24:32.811801 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:24:32.811823 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:24:32.811835 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'})
2025-09-18 10:24:32.811846 | orchestrator |
2025-09-18 10:24:32.811857 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] **************
2025-09-18 10:24:32.811868 | orchestrator | Thursday 18 September 2025 10:24:27 +0000 (0:00:00.802) 0:00:18.679 ****
2025-09-18 10:24:32.811878 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.811889 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:24:32.811899 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:24:32.811910 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:24:32.811921 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:24:32.811931 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:24:32.811942 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:24:32.811952 | orchestrator |
2025-09-18 10:24:32.811963 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] ***************************
2025-09-18 10:24:32.811973 | orchestrator | Thursday 18 September 2025 10:24:28 +0000 (0:00:01.530) 0:00:20.210 ****
2025-09-18 10:24:32.811984 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:24:32.811996 | orchestrator |
2025-09-18 10:24:32.812007 | orchestrator | TASK [osism.commons.network : List existing configuration files] ***************
2025-09-18 10:24:32.812018 | orchestrator | Thursday 18 September 2025 10:24:29 +0000 (0:00:00.871) 0:00:21.340 ****
2025-09-18 10:24:32.812028 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.812039 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.812049 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.812060 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.812071 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.812081 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.812091 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.812102 | orchestrator |
2025-09-18 10:24:32.812112 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] ***************
2025-09-18 10:24:32.812123 | orchestrator | Thursday 18 September 2025 10:24:30 +0000 (0:00:00.796) 0:00:22.211 ****
2025-09-18 10:24:32.812134 | orchestrator | ok: [testbed-manager]
2025-09-18 10:24:32.812145 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:24:32.812155 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:24:32.812172 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:24:32.812183 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:24:32.812193 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:24:32.812204 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:24:32.812214 | orchestrator |
2025-09-18 10:24:32.812225 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] ***************
2025-09-18 10:24:32.812236 | orchestrator | Thursday 18 September 2025 10:24:31 +0000 (0:00:00.796) 0:00:23.008 ****
2025-09-18 10:24:32.812247 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812258 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812268 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812279 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812289 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812300 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812310 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812321 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812331 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812342 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)
2025-09-18 10:24:32.812353 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812363 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812374 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812385 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml)
2025-09-18 10:24:32.812395 | orchestrator |
2025-09-18 10:24:32.812413 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************
2025-09-18 10:24:48.126897 | orchestrator | Thursday 18 September 2025 10:24:32 +0000 (0:00:01.137) 0:00:24.146 ****
2025-09-18 10:24:48.126994 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:24:48.127011 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:24:48.127023 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:24:48.127034 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:24:48.127044 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:24:48.127055 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:24:48.127066 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:24:48.127077 | orchestrator |
2025-09-18 10:24:48.127089 | orchestrator | TASK [osism.commons.network : Include vxlan interfaces] ************************
2025-09-18 10:24:48.127099 | orchestrator | Thursday 18 September 2025 10:24:33 +0000 (0:00:00.584) 0:00:24.730 ****
2025-09-18 10:24:48.127111 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/vxlan-interfaces.yml for testbed-node-1, testbed-manager, testbed-node-0, testbed-node-5, testbed-node-2, testbed-node-4, testbed-node-3
2025-09-18 10:24:48.127124 | orchestrator |
2025-09-18 10:24:48.127135 | orchestrator | TASK [osism.commons.network : Create systemd networkd netdev files] ************
2025-09-18 10:24:48.127146 | orchestrator | Thursday 18 September 2025 10:24:37 +0000 (0:00:04.439) 0:00:29.169 ****
2025-09-18 10:24:48.127172 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127185 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127236 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127246 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127257 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127268 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127279 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127296 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127344 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127356 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127366 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127377 | orchestrator |
2025-09-18 10:24:48.127388 | orchestrator | TASK [osism.commons.network : Create systemd networkd network files] ***********
2025-09-18 10:24:48.127399 | orchestrator | Thursday 18 September 2025 10:24:42 +0000 (0:00:04.977) 0:00:34.147 ****
2025-09-18 10:24:48.127410 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan0', 'value': {'addresses': ['192.168.112.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127433 | orchestrator | changed: [testbed-manager] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.5/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15'], 'local_ip': '192.168.16.5', 'mtu': 1350, 'vni': 23}})
2025-09-18 10:24:48.127444 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127455 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 42}})
2025-09-18 10:24:48.127476 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11',
'192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 42}}) 2025-09-18 10:24:48.127487 | orchestrator | changed: [testbed-node-2] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 42}}) 2025-09-18 10:24:48.127498 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan0', 'value': {'addresses': [], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42}}) 2025-09-18 10:24:48.127509 | orchestrator | changed: [testbed-node-1] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.11/20'], 'dests': ['192.168.16.10', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.11', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:48.127520 | orchestrator | changed: [testbed-node-0] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.10/20'], 'dests': ['192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.10', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:48.127531 | orchestrator | changed: [testbed-node-5] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.15/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.14', '192.168.16.5'], 'local_ip': '192.168.16.15', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:48.127542 | orchestrator | changed: [testbed-node-3] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.13/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.13', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:48.127562 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.12/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.13', '192.168.16.14', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.12', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:55.237188 | orchestrator | changed: [testbed-node-4] => (item={'key': 'vxlan1', 'value': {'addresses': ['192.168.128.14/20'], 'dests': ['192.168.16.10', '192.168.16.11', '192.168.16.12', '192.168.16.13', '192.168.16.15', '192.168.16.5'], 'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 23}}) 2025-09-18 10:24:55.237312 | orchestrator | 2025-09-18 10:24:55.237332 | orchestrator | TASK [osism.commons.network : Include networkd cleanup tasks] ****************** 2025-09-18 10:24:55.237345 | orchestrator | Thursday 18 September 2025 10:24:48 +0000 (0:00:05.307) 0:00:39.454 **** 2025-09-18 10:24:55.237413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-networkd.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:24:55.237428 | orchestrator | 2025-09-18 10:24:55.237440 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-09-18 10:24:55.237451 | orchestrator | Thursday 18 September 2025 10:24:49 +0000 (0:00:01.113) 0:00:40.568 **** 2025-09-18 10:24:55.237462 | orchestrator | ok: [testbed-manager] 2025-09-18 10:24:55.237475 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:24:55.237486 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:24:55.237496 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:24:55.237507 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:24:55.237518 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:24:55.237528 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:24:55.237539 | orchestrator | 2025-09-18 10:24:55.237551 | orchestrator | TASK [osism.commons.network : Remove 
unused configuration files] *************** 2025-09-18 10:24:55.237562 | orchestrator | Thursday 18 September 2025 10:24:51 +0000 (0:00:02.104) 0:00:42.672 **** 2025-09-18 10:24:55.237605 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237617 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237628 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237638 | orchestrator | skipping: [testbed-manager] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237649 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237660 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237670 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237681 | orchestrator | skipping: [testbed-node-0] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237692 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:24:55.237703 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237714 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237725 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237738 | orchestrator | skipping: [testbed-node-1] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237749 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:24:55.237761 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237773 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.network)  
2025-09-18 10:24:55.237785 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237797 | orchestrator | skipping: [testbed-node-2] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237810 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:24:55.237822 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237834 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237847 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237860 | orchestrator | skipping: [testbed-node-3] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237872 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:24:55.237898 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237909 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237920 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.237943 | orchestrator | skipping: [testbed-node-4] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.237954 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:24:55.237965 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:24:55.237976 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.network)  2025-09-18 10:24:55.237987 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.network)  2025-09-18 10:24:55.237998 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan1.netdev)  2025-09-18 10:24:55.238008 | orchestrator | skipping: [testbed-node-5] => (item=/etc/systemd/network/30-vxlan0.netdev)  2025-09-18 10:24:55.238067 | 
orchestrator | skipping: [testbed-node-5] 2025-09-18 10:24:55.238079 | orchestrator | 2025-09-18 10:24:55.238090 | orchestrator | RUNNING HANDLER [osism.commons.network : Reload systemd-networkd] ************** 2025-09-18 10:24:55.238120 | orchestrator | Thursday 18 September 2025 10:24:53 +0000 (0:00:02.058) 0:00:44.730 **** 2025-09-18 10:24:55.238132 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:24:55.238143 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:24:55.238153 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:24:55.238164 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:24:55.238175 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:24:55.238186 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:24:55.238196 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:24:55.238207 | orchestrator | 2025-09-18 10:24:55.238218 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ******** 2025-09-18 10:24:55.238228 | orchestrator | Thursday 18 September 2025 10:24:54 +0000 (0:00:00.656) 0:00:45.387 **** 2025-09-18 10:24:55.238239 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:24:55.238250 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:24:55.238261 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:24:55.238271 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:24:55.238282 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:24:55.238292 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:24:55.238303 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:24:55.238314 | orchestrator | 2025-09-18 10:24:55.238324 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:24:55.238342 | orchestrator | testbed-manager : ok=21  changed=5  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:24:55.238355 | orchestrator | testbed-node-0 : ok=20  changed=5  
unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238365 | orchestrator | testbed-node-1 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238376 | orchestrator | testbed-node-2 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238387 | orchestrator | testbed-node-3 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238398 | orchestrator | testbed-node-4 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238409 | orchestrator | testbed-node-5 : ok=20  changed=5  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:24:55.238420 | orchestrator | 2025-09-18 10:24:55.238431 | orchestrator | 2025-09-18 10:24:55.238442 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:24:55.238453 | orchestrator | Thursday 18 September 2025 10:24:54 +0000 (0:00:00.790) 0:00:46.178 **** 2025-09-18 10:24:55.238463 | orchestrator | =============================================================================== 2025-09-18 10:24:55.238482 | orchestrator | osism.commons.network : Create systemd networkd network files ----------- 5.31s 2025-09-18 10:24:55.238493 | orchestrator | osism.commons.network : Create systemd networkd netdev files ------------ 4.98s 2025-09-18 10:24:55.238504 | orchestrator | osism.commons.network : Include vxlan interfaces ------------------------ 4.44s 2025-09-18 10:24:55.238514 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 3.22s 2025-09-18 10:24:55.238525 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.15s 2025-09-18 10:24:55.238536 | orchestrator | osism.commons.network : List existing configuration files --------------- 2.10s 2025-09-18 10:24:55.238546 | orchestrator | osism.commons.network : Remove unused 
configuration files --------------- 2.06s 2025-09-18 10:24:55.238557 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.02s 2025-09-18 10:24:55.238587 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.94s 2025-09-18 10:24:55.238599 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.82s 2025-09-18 10:24:55.238609 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.53s 2025-09-18 10:24:55.238620 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.49s 2025-09-18 10:24:55.238631 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.26s 2025-09-18 10:24:55.238641 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.19s 2025-09-18 10:24:55.238652 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 1.14s 2025-09-18 10:24:55.238663 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.13s 2025-09-18 10:24:55.238673 | orchestrator | osism.commons.network : Include networkd cleanup tasks ------------------ 1.11s 2025-09-18 10:24:55.238684 | orchestrator | osism.commons.network : Create required directories --------------------- 1.06s 2025-09-18 10:24:55.238694 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.87s 2025-09-18 10:24:55.238705 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 0.80s 2025-09-18 10:24:55.573859 | orchestrator | + osism apply wireguard 2025-09-18 10:25:07.745300 | orchestrator | 2025-09-18 10:25:07 | INFO  | Task 4a01e0d2-688c-4000-9533-14336b1e3ecd (wireguard) was prepared for execution. 
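The network role's "Create systemd networkd netdev files" task above loops over vxlan items such as `{'key': 'vxlan0', 'value': {'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42, ...}}`. The following is a minimal sketch (not the actual `osism.commons.network` template) of how such an item maps onto a systemd-networkd `.netdev` unit; the `render_vxlan_netdev` helper name and the exact unit layout are illustrative assumptions.

```python
# Hypothetical sketch: render one vxlan loop item from the log above into a
# systemd-networkd .netdev unit. The real role uses a Jinja2 template; this
# only illustrates the mapping of item fields to unit keys.

def render_vxlan_netdev(name, value):
    """Render a .netdev unit for a VXLAN item like
    {'local_ip': '192.168.16.14', 'mtu': 1350, 'vni': 42, ...}."""
    return (
        "[NetDev]\n"
        f"Name={name}\n"
        "Kind=vxlan\n"
        f"MTUBytes={value['mtu']}\n"
        "\n"
        "[VXLAN]\n"
        f"VNI={value['vni']}\n"
        f"Local={value['local_ip']}\n"
    )

# One item as it appears in the task output for testbed-node-4.
item = {
    "key": "vxlan0",
    "value": {
        "addresses": [],
        "dests": ["192.168.16.10", "192.168.16.11"],
        "local_ip": "192.168.16.14",
        "mtu": 1350,
        "vni": 42,
    },
}
print(render_vxlan_netdev(item["key"], item["value"]))
```

The `dests` list is not part of the `.netdev` unit; in the role it feeds the per-peer forwarding entries, which is why each node's item lists every other node's `192.168.16.x` address but its own.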
2025-09-18 10:25:07.745401 | orchestrator | 2025-09-18 10:25:07 | INFO  | It takes a moment until task 4a01e0d2-688c-4000-9533-14336b1e3ecd (wireguard) has been started and output is visible here. 2025-09-18 10:25:25.672134 | orchestrator | 2025-09-18 10:25:25.672244 | orchestrator | PLAY [Apply role wireguard] **************************************************** 2025-09-18 10:25:25.672260 | orchestrator | 2025-09-18 10:25:25.672272 | orchestrator | TASK [osism.services.wireguard : Install iptables package] ********************* 2025-09-18 10:25:25.672284 | orchestrator | Thursday 18 September 2025 10:25:11 +0000 (0:00:00.170) 0:00:00.170 **** 2025-09-18 10:25:25.672295 | orchestrator | ok: [testbed-manager] 2025-09-18 10:25:25.672307 | orchestrator | 2025-09-18 10:25:25.672319 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ******************** 2025-09-18 10:25:25.672330 | orchestrator | Thursday 18 September 2025 10:25:12 +0000 (0:00:01.247) 0:00:01.418 **** 2025-09-18 10:25:25.672341 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672352 | orchestrator | 2025-09-18 10:25:25.672363 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] ******* 2025-09-18 10:25:25.672373 | orchestrator | Thursday 18 September 2025 10:25:18 +0000 (0:00:05.423) 0:00:06.842 **** 2025-09-18 10:25:25.672384 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672395 | orchestrator | 2025-09-18 10:25:25.672406 | orchestrator | TASK [osism.services.wireguard : Create preshared key] ************************* 2025-09-18 10:25:25.672416 | orchestrator | Thursday 18 September 2025 10:25:18 +0000 (0:00:00.449) 0:00:07.292 **** 2025-09-18 10:25:25.672444 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672480 | orchestrator | 2025-09-18 10:25:25.672491 | orchestrator | TASK [osism.services.wireguard : Get preshared key] **************************** 2025-09-18 10:25:25.672503 | orchestrator 
| Thursday 18 September 2025 10:25:19 +0000 (0:00:00.382) 0:00:07.674 **** 2025-09-18 10:25:25.672514 | orchestrator | ok: [testbed-manager] 2025-09-18 10:25:25.672524 | orchestrator | 2025-09-18 10:25:25.672535 | orchestrator | TASK [osism.services.wireguard : Get public key - server] ********************** 2025-09-18 10:25:25.672632 | orchestrator | Thursday 18 September 2025 10:25:19 +0000 (0:00:00.491) 0:00:08.165 **** 2025-09-18 10:25:25.672646 | orchestrator | ok: [testbed-manager] 2025-09-18 10:25:25.672657 | orchestrator | 2025-09-18 10:25:25.672668 | orchestrator | TASK [osism.services.wireguard : Get private key - server] ********************* 2025-09-18 10:25:25.672681 | orchestrator | Thursday 18 September 2025 10:25:20 +0000 (0:00:00.541) 0:00:08.706 **** 2025-09-18 10:25:25.672692 | orchestrator | ok: [testbed-manager] 2025-09-18 10:25:25.672704 | orchestrator | 2025-09-18 10:25:25.672716 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] ************* 2025-09-18 10:25:25.672728 | orchestrator | Thursday 18 September 2025 10:25:20 +0000 (0:00:00.402) 0:00:09.109 **** 2025-09-18 10:25:25.672740 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672752 | orchestrator | 2025-09-18 10:25:25.672764 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] ************** 2025-09-18 10:25:25.672777 | orchestrator | Thursday 18 September 2025 10:25:21 +0000 (0:00:01.162) 0:00:10.272 **** 2025-09-18 10:25:25.672788 | orchestrator | changed: [testbed-manager] => (item=None) 2025-09-18 10:25:25.672801 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672813 | orchestrator | 2025-09-18 10:25:25.672825 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] ********** 2025-09-18 10:25:25.672836 | orchestrator | Thursday 18 September 2025 10:25:22 +0000 (0:00:00.970) 0:00:11.243 **** 2025-09-18 10:25:25.672848 | orchestrator | changed: 
[testbed-manager] 2025-09-18 10:25:25.672860 | orchestrator | 2025-09-18 10:25:25.672872 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] *************** 2025-09-18 10:25:25.672884 | orchestrator | Thursday 18 September 2025 10:25:24 +0000 (0:00:01.694) 0:00:12.937 **** 2025-09-18 10:25:25.672896 | orchestrator | changed: [testbed-manager] 2025-09-18 10:25:25.672908 | orchestrator | 2025-09-18 10:25:25.672920 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:25:25.672932 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:25:25.672945 | orchestrator | 2025-09-18 10:25:25.672957 | orchestrator | 2025-09-18 10:25:25.672969 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:25:25.672981 | orchestrator | Thursday 18 September 2025 10:25:25 +0000 (0:00:00.956) 0:00:13.894 **** 2025-09-18 10:25:25.672993 | orchestrator | =============================================================================== 2025-09-18 10:25:25.673004 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 5.42s 2025-09-18 10:25:25.673017 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s 2025-09-18 10:25:25.673029 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.25s 2025-09-18 10:25:25.673041 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s 2025-09-18 10:25:25.673052 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.97s 2025-09-18 10:25:25.673062 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.96s 2025-09-18 10:25:25.673073 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.54s 
2025-09-18 10:25:25.673083 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.49s 2025-09-18 10:25:25.673094 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.45s 2025-09-18 10:25:25.673104 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.40s 2025-09-18 10:25:25.673124 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.38s 2025-09-18 10:25:25.980270 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh 2025-09-18 10:25:26.020912 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current 2025-09-18 10:25:26.020959 | orchestrator | Dload Upload Total Spent Left Speed 2025-09-18 10:25:26.105954 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 15 100 15 0 0 176 0 --:--:-- --:--:-- --:--:-- 178 2025-09-18 10:25:26.130664 | orchestrator | + osism apply --environment custom workarounds 2025-09-18 10:25:28.014525 | orchestrator | 2025-09-18 10:25:28 | INFO  | Trying to run play workarounds in environment custom 2025-09-18 10:25:38.229308 | orchestrator | 2025-09-18 10:25:38 | INFO  | Task d0af6b19-c4c7-4394-b819-5bca95c17b35 (workarounds) was prepared for execution. 2025-09-18 10:25:38.229418 | orchestrator | 2025-09-18 10:25:38 | INFO  | It takes a moment until task d0af6b19-c4c7-4394-b819-5bca95c17b35 (workarounds) has been started and output is visible here. 
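The wireguard role above generates a server keypair and preshared key, then copies a `wg0.conf` onto the manager for `wg-quick@wg0.service`. The sketch below shows the general shape of such a configuration file; the address, port, and `AllowedIPs` values are illustrative assumptions, not taken from this testbed, and the key strings are placeholders.

```python
# Hypothetical sketch of a wg-quick style wg0.conf, assembled from the
# artifacts the role creates (server private key, client public key,
# preshared key). Values below are placeholders/assumptions.

def render_wg0(private_key, peer_public_key, preshared_key,
               address="10.0.0.1/24", listen_port=51820,
               allowed_ips="10.0.0.2/32"):
    return (
        "[Interface]\n"
        f"Address = {address}\n"
        f"ListenPort = {listen_port}\n"
        f"PrivateKey = {private_key}\n"
        "\n"
        "[Peer]\n"
        f"PublicKey = {peer_public_key}\n"
        f"PresharedKey = {preshared_key}\n"
        f"AllowedIPs = {allowed_ips}\n"
    )

conf = render_wg0("<server-private-key>", "<client-public-key>", "<psk>")
print(conf)
```

The preshared key is optional in WireGuard itself, but the role creates one ("Create preshared key" task), which adds a symmetric layer on top of the public-key handshake.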
2025-09-18 10:26:01.635631 | orchestrator | 2025-09-18 10:26:01.635745 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:26:01.635762 | orchestrator | 2025-09-18 10:26:01.635774 | orchestrator | TASK [Group hosts based on virtualization_role] ******************************** 2025-09-18 10:26:01.635786 | orchestrator | Thursday 18 September 2025 10:25:41 +0000 (0:00:00.134) 0:00:00.134 **** 2025-09-18 10:26:01.635798 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635809 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635836 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635848 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635859 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635870 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635880 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest) 2025-09-18 10:26:01.635891 | orchestrator | 2025-09-18 10:26:01.635902 | orchestrator | PLAY [Apply netplan configuration on the manager node] ************************* 2025-09-18 10:26:01.635913 | orchestrator | 2025-09-18 10:26:01.635923 | orchestrator | TASK [Apply netplan configuration] ********************************************* 2025-09-18 10:26:01.635934 | orchestrator | Thursday 18 September 2025 10:25:42 +0000 (0:00:00.631) 0:00:00.765 **** 2025-09-18 10:26:01.635945 | orchestrator | ok: [testbed-manager] 2025-09-18 10:26:01.635957 | orchestrator | 2025-09-18 10:26:01.635968 | orchestrator | PLAY [Apply netplan configuration on all other nodes] ************************** 2025-09-18 10:26:01.635978 | orchestrator | 2025-09-18 10:26:01.635989 | orchestrator | TASK [Apply netplan 
configuration] ********************************************* 2025-09-18 10:26:01.636000 | orchestrator | Thursday 18 September 2025 10:25:44 +0000 (0:00:02.120) 0:00:02.886 **** 2025-09-18 10:26:01.636010 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:26:01.636021 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:26:01.636032 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:26:01.636043 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:26:01.636053 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:26:01.636064 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:26:01.636075 | orchestrator | 2025-09-18 10:26:01.636086 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] ************************* 2025-09-18 10:26:01.636097 | orchestrator | 2025-09-18 10:26:01.636108 | orchestrator | TASK [Copy custom CA certificates] ********************************************* 2025-09-18 10:26:01.636119 | orchestrator | Thursday 18 September 2025 10:25:46 +0000 (0:00:01.797) 0:00:04.684 **** 2025-09-18 10:26:01.636130 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636142 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636176 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636188 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636198 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636209 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt) 2025-09-18 10:26:01.636219 | orchestrator | 2025-09-18 10:26:01.636230 | orchestrator | TASK [Run 
update-ca-certificates] ********************************************** 2025-09-18 10:26:01.636241 | orchestrator | Thursday 18 September 2025 10:25:47 +0000 (0:00:01.479) 0:00:06.163 **** 2025-09-18 10:26:01.636252 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:26:01.636262 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:26:01.636273 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:26:01.636283 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:26:01.636294 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:26:01.636305 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:26:01.636315 | orchestrator | 2025-09-18 10:26:01.636326 | orchestrator | TASK [Run update-ca-trust] ***************************************************** 2025-09-18 10:26:01.636337 | orchestrator | Thursday 18 September 2025 10:25:51 +0000 (0:00:03.494) 0:00:09.658 **** 2025-09-18 10:26:01.636347 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:26:01.636358 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:26:01.636368 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:26:01.636379 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:26:01.636389 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:26:01.636400 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:26:01.636411 | orchestrator | 2025-09-18 10:26:01.636421 | orchestrator | PLAY [Add a workaround service] ************************************************ 2025-09-18 10:26:01.636432 | orchestrator | 2025-09-18 10:26:01.636443 | orchestrator | TASK [Copy workarounds.sh scripts] ********************************************* 2025-09-18 10:26:01.636453 | orchestrator | Thursday 18 September 2025 10:25:51 +0000 (0:00:00.694) 0:00:10.352 **** 2025-09-18 10:26:01.636464 | orchestrator | changed: [testbed-manager] 2025-09-18 10:26:01.636474 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:26:01.636485 | orchestrator | changed: [testbed-node-1] 2025-09-18 
10:26:01.636495 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:26:01.636506 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:26:01.636540 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:26:01.636551 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:26:01.636562 | orchestrator | 2025-09-18 10:26:01.636572 | orchestrator | TASK [Copy workarounds systemd unit file] ************************************** 2025-09-18 10:26:01.636583 | orchestrator | Thursday 18 September 2025 10:25:53 +0000 (0:00:01.703) 0:00:12.056 **** 2025-09-18 10:26:01.636594 | orchestrator | changed: [testbed-manager] 2025-09-18 10:26:01.636605 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:26:01.636615 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:26:01.636626 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:26:01.636636 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:26:01.636647 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:26:01.636676 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:26:01.636688 | orchestrator | 2025-09-18 10:26:01.636699 | orchestrator | TASK [Reload systemd daemon] *************************************************** 2025-09-18 10:26:01.636709 | orchestrator | Thursday 18 September 2025 10:25:55 +0000 (0:00:01.677) 0:00:13.734 **** 2025-09-18 10:26:01.636720 | orchestrator | ok: [testbed-manager] 2025-09-18 10:26:01.636731 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:26:01.636741 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:26:01.636752 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:26:01.636763 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:26:01.636781 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:26:01.636792 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:26:01.636802 | orchestrator | 2025-09-18 10:26:01.636820 | orchestrator | TASK [Enable workarounds.service (Debian)] ************************************* 2025-09-18 10:26:01.636832 | orchestrator 
| Thursday 18 September 2025 10:25:56 +0000 (0:00:01.487) 0:00:15.221 **** 2025-09-18 10:26:01.636842 | orchestrator | changed: [testbed-manager] 2025-09-18 10:26:01.636853 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:26:01.636864 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:26:01.636874 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:26:01.636885 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:26:01.636895 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:26:01.636906 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:26:01.636917 | orchestrator | 2025-09-18 10:26:01.636927 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] *************************** 2025-09-18 10:26:01.636938 | orchestrator | Thursday 18 September 2025 10:25:58 +0000 (0:00:01.799) 0:00:17.020 **** 2025-09-18 10:26:01.636949 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:26:01.636959 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:26:01.636970 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:26:01.636980 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:26:01.636991 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:26:01.637001 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:26:01.637012 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:26:01.637022 | orchestrator | 2025-09-18 10:26:01.637033 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ****************** 2025-09-18 10:26:01.637044 | orchestrator | 2025-09-18 10:26:01.637055 | orchestrator | TASK [Install python3-docker] ************************************************** 2025-09-18 10:26:01.637066 | orchestrator | Thursday 18 September 2025 10:25:59 +0000 (0:00:00.663) 0:00:17.683 **** 2025-09-18 10:26:01.637076 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:26:01.637087 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:26:01.637098 | orchestrator | ok: [testbed-node-2] 
2025-09-18 10:26:01.637108 | orchestrator | ok: [testbed-manager] 2025-09-18 10:26:01.637119 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:26:01.637129 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:26:01.637140 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:26:01.637150 | orchestrator | 2025-09-18 10:26:01.637161 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:26:01.637173 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:26:01.637184 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637195 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637206 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637217 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637227 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637238 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:01.637248 | orchestrator | 2025-09-18 10:26:01.637259 | orchestrator | 2025-09-18 10:26:01.637270 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:26:01.637281 | orchestrator | Thursday 18 September 2025 10:26:01 +0000 (0:00:02.315) 0:00:19.998 **** 2025-09-18 10:26:01.637302 | orchestrator | =============================================================================== 2025-09-18 10:26:01.637313 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.49s 2025-09-18 10:26:01.637323 | orchestrator | Install python3-docker 
-------------------------------------------------- 2.32s 2025-09-18 10:26:01.637334 | orchestrator | Apply netplan configuration --------------------------------------------- 2.12s 2025-09-18 10:26:01.637345 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.80s 2025-09-18 10:26:01.637355 | orchestrator | Apply netplan configuration --------------------------------------------- 1.80s 2025-09-18 10:26:01.637366 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.70s 2025-09-18 10:26:01.637376 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.68s 2025-09-18 10:26:01.637387 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.49s 2025-09-18 10:26:01.637397 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.48s 2025-09-18 10:26:01.637408 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.69s 2025-09-18 10:26:01.637419 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.66s 2025-09-18 10:26:01.637435 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.63s 2025-09-18 10:26:02.299016 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes 2025-09-18 10:26:14.320806 | orchestrator | 2025-09-18 10:26:14 | INFO  | Task dc1d8240-02b3-4918-b3b5-8f4a61a1fb63 (reboot) was prepared for execution. 2025-09-18 10:26:14.320977 | orchestrator | 2025-09-18 10:26:14 | INFO  | It takes a moment until task dc1d8240-02b3-4918-b3b5-8f4a61a1fb63 (reboot) has been started and output is visible here. 
2025-09-18 10:26:24.047734 | orchestrator | 2025-09-18 10:26:24.047867 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.047883 | orchestrator | 2025-09-18 10:26:24.047894 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.047905 | orchestrator | Thursday 18 September 2025 10:26:18 +0000 (0:00:00.220) 0:00:00.220 **** 2025-09-18 10:26:24.047915 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:26:24.047925 | orchestrator | 2025-09-18 10:26:24.047935 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.047945 | orchestrator | Thursday 18 September 2025 10:26:18 +0000 (0:00:00.099) 0:00:00.319 **** 2025-09-18 10:26:24.047955 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:26:24.047965 | orchestrator | 2025-09-18 10:26:24.047974 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 10:26:24.047984 | orchestrator | Thursday 18 September 2025 10:26:19 +0000 (0:00:00.916) 0:00:01.236 **** 2025-09-18 10:26:24.047993 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:26:24.048003 | orchestrator | 2025-09-18 10:26:24.048013 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.048023 | orchestrator | 2025-09-18 10:26:24.048032 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.048042 | orchestrator | Thursday 18 September 2025 10:26:19 +0000 (0:00:00.103) 0:00:01.339 **** 2025-09-18 10:26:24.048051 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:26:24.048061 | orchestrator | 2025-09-18 10:26:24.048072 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.048083 | orchestrator | Thursday 18 September 
2025 10:26:19 +0000 (0:00:00.090) 0:00:01.429 **** 2025-09-18 10:26:24.048094 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:26:24.048104 | orchestrator | 2025-09-18 10:26:24.048115 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 10:26:24.048126 | orchestrator | Thursday 18 September 2025 10:26:20 +0000 (0:00:00.634) 0:00:02.064 **** 2025-09-18 10:26:24.048136 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:26:24.048177 | orchestrator | 2025-09-18 10:26:24.048189 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.048199 | orchestrator | 2025-09-18 10:26:24.048210 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.048221 | orchestrator | Thursday 18 September 2025 10:26:20 +0000 (0:00:00.107) 0:00:02.171 **** 2025-09-18 10:26:24.048232 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:26:24.048242 | orchestrator | 2025-09-18 10:26:24.048253 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.048264 | orchestrator | Thursday 18 September 2025 10:26:20 +0000 (0:00:00.174) 0:00:02.346 **** 2025-09-18 10:26:24.048274 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:26:24.048285 | orchestrator | 2025-09-18 10:26:24.048296 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 10:26:24.048306 | orchestrator | Thursday 18 September 2025 10:26:21 +0000 (0:00:00.624) 0:00:02.971 **** 2025-09-18 10:26:24.048317 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:26:24.048328 | orchestrator | 2025-09-18 10:26:24.048338 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.048349 | orchestrator | 2025-09-18 10:26:24.048360 | orchestrator | TASK [Exit playbook, if 
user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.048370 | orchestrator | Thursday 18 September 2025 10:26:21 +0000 (0:00:00.122) 0:00:03.094 **** 2025-09-18 10:26:24.048382 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:26:24.048393 | orchestrator | 2025-09-18 10:26:24.048404 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.048414 | orchestrator | Thursday 18 September 2025 10:26:21 +0000 (0:00:00.115) 0:00:03.209 **** 2025-09-18 10:26:24.048425 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:26:24.048436 | orchestrator | 2025-09-18 10:26:24.048446 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 10:26:24.048457 | orchestrator | Thursday 18 September 2025 10:26:22 +0000 (0:00:00.656) 0:00:03.865 **** 2025-09-18 10:26:24.048467 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:26:24.048478 | orchestrator | 2025-09-18 10:26:24.048489 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.048522 | orchestrator | 2025-09-18 10:26:24.048534 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.048545 | orchestrator | Thursday 18 September 2025 10:26:22 +0000 (0:00:00.103) 0:00:03.968 **** 2025-09-18 10:26:24.048556 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:26:24.048567 | orchestrator | 2025-09-18 10:26:24.048577 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.048588 | orchestrator | Thursday 18 September 2025 10:26:22 +0000 (0:00:00.102) 0:00:04.071 **** 2025-09-18 10:26:24.048599 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:26:24.048609 | orchestrator | 2025-09-18 10:26:24.048620 | orchestrator | TASK [Reboot system - wait for the reboot to complete] 
************************* 2025-09-18 10:26:24.048631 | orchestrator | Thursday 18 September 2025 10:26:22 +0000 (0:00:00.635) 0:00:04.706 **** 2025-09-18 10:26:24.048641 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:26:24.048652 | orchestrator | 2025-09-18 10:26:24.048663 | orchestrator | PLAY [Reboot systems] ********************************************************** 2025-09-18 10:26:24.048673 | orchestrator | 2025-09-18 10:26:24.048684 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] ******************* 2025-09-18 10:26:24.048695 | orchestrator | Thursday 18 September 2025 10:26:22 +0000 (0:00:00.111) 0:00:04.818 **** 2025-09-18 10:26:24.048705 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:26:24.048716 | orchestrator | 2025-09-18 10:26:24.048726 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ****************** 2025-09-18 10:26:24.048737 | orchestrator | Thursday 18 September 2025 10:26:23 +0000 (0:00:00.092) 0:00:04.910 **** 2025-09-18 10:26:24.048748 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:26:24.048759 | orchestrator | 2025-09-18 10:26:24.048769 | orchestrator | TASK [Reboot system - wait for the reboot to complete] ************************* 2025-09-18 10:26:24.048790 | orchestrator | Thursday 18 September 2025 10:26:23 +0000 (0:00:00.637) 0:00:05.547 **** 2025-09-18 10:26:24.048842 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:26:24.048855 | orchestrator | 2025-09-18 10:26:24.048866 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:26:24.048878 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:24.048891 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:24.048902 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  
rescued=0 ignored=0 2025-09-18 10:26:24.048913 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:24.048924 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:24.048934 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-09-18 10:26:24.048945 | orchestrator | 2025-09-18 10:26:24.048956 | orchestrator | 2025-09-18 10:26:24.048967 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:26:24.048978 | orchestrator | Thursday 18 September 2025 10:26:23 +0000 (0:00:00.035) 0:00:05.583 **** 2025-09-18 10:26:24.048989 | orchestrator | =============================================================================== 2025-09-18 10:26:24.048999 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.11s 2025-09-18 10:26:24.049016 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.67s 2025-09-18 10:26:24.049027 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.58s 2025-09-18 10:26:24.345086 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes 2025-09-18 10:26:36.334465 | orchestrator | 2025-09-18 10:26:36 | INFO  | Task ca01e877-5e7f-4a43-bf54-9bce059273f8 (wait-for-connection) was prepared for execution. 2025-09-18 10:26:36.334628 | orchestrator | 2025-09-18 10:26:36 | INFO  | It takes a moment until task ca01e877-5e7f-4a43-bf54-9bce059273f8 (wait-for-connection) has been started and output is visible here. 
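The reboot play above deliberately fires the reboot without waiting (only the "do not wait for the reboot to complete" task reports changed; the in-play wait task is skipped), and reachability is then verified by the separate `wait-for-connection` play. A minimal standalone sketch of that two-step pattern, with the reachability probe passed in as a command so the helper is self-contained (in the testbed the probe is effectively an SSH connection check, not this hypothetical helper):

```shell
# Sketch of the "reboot now, verify reachability later" pattern, assuming a
# POSIX shell. "$@" is the probe command; in practice it would be something
# like:  ssh -o ConnectTimeout=5 "$host" true
wait_until_reachable() {
    timeout=$1; shift
    deadline=$(( $(date +%s) + timeout ))
    # Poll the probe until it succeeds or the deadline passes.
    until "$@" >/dev/null 2>&1; do
        if [ "$(date +%s)" -ge "$deadline" ]; then
            return 1   # host never came back within the timeout
        fi
        sleep 1
    done
    return 0
}
```

Splitting the reboot from the wait lets all nodes reboot in parallel and keeps the reboot play fast; a single consolidated wait play then gates the rest of the pipeline.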
2025-09-18 10:26:52.287354 | orchestrator | 2025-09-18 10:26:52.287587 | orchestrator | PLAY [Wait until remote systems are reachable] ********************************* 2025-09-18 10:26:52.287618 | orchestrator | 2025-09-18 10:26:52.287638 | orchestrator | TASK [Wait until remote system is reachable] *********************************** 2025-09-18 10:26:52.287657 | orchestrator | Thursday 18 September 2025 10:26:40 +0000 (0:00:00.238) 0:00:00.238 **** 2025-09-18 10:26:52.287676 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:26:52.287695 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:26:52.287712 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:26:52.287730 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:26:52.287747 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:26:52.287764 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:26:52.287782 | orchestrator | 2025-09-18 10:26:52.287801 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:26:52.287821 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287841 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287859 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287931 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287951 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287967 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:26:52.287984 | orchestrator | 2025-09-18 10:26:52.288001 | orchestrator | 2025-09-18 10:26:52.288019 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-18 10:26:52.288036 | orchestrator | Thursday 18 September 2025 10:26:51 +0000 (0:00:11.566) 0:00:11.805 **** 2025-09-18 10:26:52.288052 | orchestrator | =============================================================================== 2025-09-18 10:26:52.288069 | orchestrator | Wait until remote system is reachable ---------------------------------- 11.57s 2025-09-18 10:26:52.577523 | orchestrator | + osism apply hddtemp 2025-09-18 10:27:04.676045 | orchestrator | 2025-09-18 10:27:04 | INFO  | Task fcfe5854-59e5-4db2-8bc1-3e2bec361214 (hddtemp) was prepared for execution. 2025-09-18 10:27:04.676213 | orchestrator | 2025-09-18 10:27:04 | INFO  | It takes a moment until task fcfe5854-59e5-4db2-8bc1-3e2bec361214 (hddtemp) has been started and output is visible here. 2025-09-18 10:27:33.542248 | orchestrator | 2025-09-18 10:27:33.542360 | orchestrator | PLAY [Apply role hddtemp] ****************************************************** 2025-09-18 10:27:33.542376 | orchestrator | 2025-09-18 10:27:33.542388 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] ***** 2025-09-18 10:27:33.542399 | orchestrator | Thursday 18 September 2025 10:27:08 +0000 (0:00:00.257) 0:00:00.257 **** 2025-09-18 10:27:33.542410 | orchestrator | ok: [testbed-manager] 2025-09-18 10:27:33.542423 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:27:33.542434 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:27:33.542486 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:27:33.542499 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:27:33.542509 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:27:33.542520 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:27:33.542531 | orchestrator | 2025-09-18 10:27:33.542542 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] **** 2025-09-18 10:27:33.542553 | orchestrator | Thursday 18 September 2025 
10:27:09 +0000 (0:00:00.690) 0:00:00.948 **** 2025-09-18 10:27:33.542587 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:27:33.542601 | orchestrator | 2025-09-18 10:27:33.542612 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] ************************* 2025-09-18 10:27:33.542623 | orchestrator | Thursday 18 September 2025 10:27:10 +0000 (0:00:01.219) 0:00:02.168 **** 2025-09-18 10:27:33.542634 | orchestrator | ok: [testbed-manager] 2025-09-18 10:27:33.542645 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:27:33.542656 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:27:33.542666 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:27:33.542677 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:27:33.542687 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:27:33.542698 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:27:33.542708 | orchestrator | 2025-09-18 10:27:33.542719 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] ***************** 2025-09-18 10:27:33.542730 | orchestrator | Thursday 18 September 2025 10:27:12 +0000 (0:00:02.091) 0:00:04.259 **** 2025-09-18 10:27:33.542741 | orchestrator | changed: [testbed-manager] 2025-09-18 10:27:33.542752 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:27:33.542763 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:27:33.542774 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:27:33.542786 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:27:33.542827 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:27:33.542841 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:27:33.542853 | orchestrator | 2025-09-18 10:27:33.542865 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is 
available] ********* 2025-09-18 10:27:33.542878 | orchestrator | Thursday 18 September 2025 10:27:13 +0000 (0:00:01.221) 0:00:05.481 **** 2025-09-18 10:27:33.542889 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:27:33.542901 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:27:33.542913 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:27:33.542925 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:27:33.542937 | orchestrator | ok: [testbed-manager] 2025-09-18 10:27:33.542949 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:27:33.542961 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:27:33.542973 | orchestrator | 2025-09-18 10:27:33.542985 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] ******************* 2025-09-18 10:27:33.542997 | orchestrator | Thursday 18 September 2025 10:27:15 +0000 (0:00:01.177) 0:00:06.658 **** 2025-09-18 10:27:33.543009 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:27:33.543021 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:27:33.543034 | orchestrator | changed: [testbed-manager] 2025-09-18 10:27:33.543047 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:27:33.543059 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:27:33.543071 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:27:33.543082 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:27:33.543092 | orchestrator | 2025-09-18 10:27:33.543103 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] ***************************** 2025-09-18 10:27:33.543114 | orchestrator | Thursday 18 September 2025 10:27:15 +0000 (0:00:00.895) 0:00:07.554 **** 2025-09-18 10:27:33.543124 | orchestrator | changed: [testbed-manager] 2025-09-18 10:27:33.543135 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:27:33.543145 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:27:33.543156 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:27:33.543166 | orchestrator | changed: 
[testbed-node-2] 2025-09-18 10:27:33.543177 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:27:33.543187 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:27:33.543198 | orchestrator | 2025-09-18 10:27:33.543209 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] **** 2025-09-18 10:27:33.543219 | orchestrator | Thursday 18 September 2025 10:27:29 +0000 (0:00:13.893) 0:00:21.447 **** 2025-09-18 10:27:33.543230 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:27:33.543242 | orchestrator | 2025-09-18 10:27:33.543253 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] ********************** 2025-09-18 10:27:33.543264 | orchestrator | Thursday 18 September 2025 10:27:31 +0000 (0:00:01.416) 0:00:22.864 **** 2025-09-18 10:27:33.543274 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:27:33.543285 | orchestrator | changed: [testbed-manager] 2025-09-18 10:27:33.543295 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:27:33.543306 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:27:33.543316 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:27:33.543327 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:27:33.543337 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:27:33.543348 | orchestrator | 2025-09-18 10:27:33.543358 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:27:33.543369 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:27:33.543399 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543416 | orchestrator | testbed-node-1 : 
ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543435 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543466 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543478 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543489 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-09-18 10:27:33.543500 | orchestrator | 2025-09-18 10:27:33.543511 | orchestrator | 2025-09-18 10:27:33.543521 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:27:33.543532 | orchestrator | Thursday 18 September 2025 10:27:33 +0000 (0:00:01.948) 0:00:24.813 **** 2025-09-18 10:27:33.543543 | orchestrator | =============================================================================== 2025-09-18 10:27:33.543554 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.89s 2025-09-18 10:27:33.543564 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.09s 2025-09-18 10:27:33.543575 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.95s 2025-09-18 10:27:33.543586 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.42s 2025-09-18 10:27:33.543596 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.22s 2025-09-18 10:27:33.543607 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.22s 2025-09-18 10:27:33.543617 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.18s 2025-09-18 10:27:33.543628 | orchestrator | osism.services.hddtemp : Load 
Kernel Module drivetemp ------------------- 0.90s 2025-09-18 10:27:33.543639 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.69s 2025-09-18 10:27:33.852038 | orchestrator | ++ semver latest 7.1.1 2025-09-18 10:27:33.905879 | orchestrator | + [[ -1 -ge 0 ]] 2025-09-18 10:27:33.906217 | orchestrator | + [[ latest == \l\a\t\e\s\t ]] 2025-09-18 10:27:33.906237 | orchestrator | + sudo systemctl restart manager.service 2025-09-18 10:28:23.330285 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-09-18 10:28:23.330395 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-09-18 10:28:23.330432 | orchestrator | + local max_attempts=60 2025-09-18 10:28:23.330446 | orchestrator | + local name=ceph-ansible 2025-09-18 10:28:23.330458 | orchestrator | + local attempt_num=1 2025-09-18 10:28:23.330607 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 10:28:23.368360 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 10:28:23.368433 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 10:28:23.368447 | orchestrator | + sleep 5 2025-09-18 10:28:28.374917 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 10:28:28.407197 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 10:28:28.407242 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 10:28:28.407254 | orchestrator | + sleep 5 2025-09-18 10:28:33.410883 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 10:28:33.450343 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]] 2025-09-18 10:28:33.450382 | orchestrator | + (( attempt_num++ == max_attempts )) 2025-09-18 10:28:33.451307 | orchestrator | + sleep 5 2025-09-18 10:28:38.456053 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-09-18 10:28:38.492876 | orchestrator | + 
[[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:28:38.492966 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:28:38.492989 | orchestrator | + sleep 5
2025-09-18 10:28:43.496669 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:28:43.530325 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:28:43.530372 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:28:43.530447 | orchestrator | + sleep 5
2025-09-18 10:28:48.533854 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:28:48.570840 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:28:48.570911 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:28:48.570925 | orchestrator | + sleep 5
2025-09-18 10:28:53.574984 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:28:53.622760 | orchestrator | + [[ unhealthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:28:53.622808 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:28:53.622817 | orchestrator | + sleep 5
2025-09-18 10:28:58.629156 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:28:58.743039 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:28:58.743128 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:28:58.743145 | orchestrator | + sleep 5
2025-09-18 10:29:03.745923 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:03.782613 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:03.782668 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:29:03.782682 | orchestrator | + sleep 5
2025-09-18 10:29:08.785822 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:08.828523 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:08.828570 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:29:08.828584 | orchestrator | + sleep 5
2025-09-18 10:29:13.833549 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:13.877033 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:13.877085 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:29:13.877098 | orchestrator | + sleep 5
2025-09-18 10:29:18.882699 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:18.925504 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:18.925562 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:29:18.925576 | orchestrator | + sleep 5
2025-09-18 10:29:23.930230 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:23.976889 | orchestrator | + [[ starting == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:23.976935 | orchestrator | + (( attempt_num++ == max_attempts ))
2025-09-18 10:29:23.976948 | orchestrator | + sleep 5
2025-09-18 10:29:28.982791 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-09-18 10:29:29.023791 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:29.023869 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-09-18 10:29:29.023886 | orchestrator | + local max_attempts=60
2025-09-18 10:29:29.023902 | orchestrator | + local name=kolla-ansible
2025-09-18 10:29:29.023914 | orchestrator | + local attempt_num=1
2025-09-18 10:29:29.025034 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-09-18 10:29:29.065780 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:29.065823 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-09-18 10:29:29.065832 | orchestrator | + local max_attempts=60
2025-09-18 10:29:29.065840 | orchestrator | + local name=osism-ansible
2025-09-18 10:29:29.065848 | orchestrator | + local attempt_num=1
2025-09-18 10:29:29.065855 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-09-18 10:29:29.100687 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-09-18 10:29:29.100748 | orchestrator | + [[ true == \t\r\u\e ]]
2025-09-18 10:29:29.100763 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-09-18 10:29:29.279549 | orchestrator | ARA in ceph-ansible already disabled.
2025-09-18 10:29:29.450895 | orchestrator | ARA in kolla-ansible already disabled.
2025-09-18 10:29:29.812659 | orchestrator | + osism apply gather-facts
2025-09-18 10:29:49.052102 | orchestrator | 2025-09-18 10:29:49 | INFO  | Task fdbe672b-9fc9-4a67-b188-b13f23966b6e (gather-facts) was prepared for execution.
2025-09-18 10:29:49.052169 | orchestrator | 2025-09-18 10:29:49 | INFO  | It takes a moment until task fdbe672b-9fc9-4a67-b188-b13f23966b6e (gather-facts) has been started and output is visible here.
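The xtrace above comes from a retry helper that polls `docker inspect` until the container's healthcheck reports `healthy`. A minimal sketch of such a loop, reconstructed from the trace (the real helper in the testbed scripts calls `/usr/bin/docker` by absolute path; plain `docker` is used here so the command can be stubbed, and the failure return code is an assumption):

```shell
#!/usr/bin/env bash
# Poll a container's health status until it is "healthy" or the
# attempt budget is exhausted. Reconstructed sketch, not the original.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    # A container typically passes through "starting" (and possibly
    # "unhealthy") before its healthcheck settles on "healthy".
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "$name")" == "healthy" ]]; do
        if (( attempt_num++ == max_attempts )); then
            echo "container $name not healthy after $max_attempts attempts" >&2
            return 1
        fi
        sleep 5
    done
}
```

In the log the helper is invoked as `wait_for_container_healthy 60 ceph-ansible`, i.e. up to 60 probes five seconds apart, giving each container roughly a five-minute budget.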
2025-09-18 10:30:02.796429 | orchestrator |
2025-09-18 10:30:02.796560 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-18 10:30:02.796578 | orchestrator |
2025-09-18 10:30:02.796590 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-18 10:30:02.796639 | orchestrator | Thursday 18 September 2025 10:29:53 +0000 (0:00:00.230) 0:00:00.230 ****
2025-09-18 10:30:02.796652 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:30:02.796663 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:30:02.796674 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:30:02.796685 | orchestrator | ok: [testbed-manager]
2025-09-18 10:30:02.796695 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:30:02.796705 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:30:02.796716 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:30:02.796726 | orchestrator |
2025-09-18 10:30:02.796737 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-18 10:30:02.796748 | orchestrator |
2025-09-18 10:30:02.796759 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-18 10:30:02.796769 | orchestrator | Thursday 18 September 2025 10:30:01 +0000 (0:00:08.715) 0:00:08.945 ****
2025-09-18 10:30:02.796780 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:30:02.796791 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:30:02.796802 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:30:02.796813 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:30:02.796823 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:30:02.796833 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:30:02.796844 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:30:02.796854 | orchestrator |
2025-09-18 10:30:02.796865 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:30:02.796876 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796888 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796898 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796909 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796919 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796931 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796944 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:30:02.796956 | orchestrator |
2025-09-18 10:30:02.796968 | orchestrator |
2025-09-18 10:30:02.796981 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:30:02.796995 | orchestrator | Thursday 18 September 2025 10:30:02 +0000 (0:00:00.526) 0:00:09.472 ****
2025-09-18 10:30:02.797007 | orchestrator | ===============================================================================
2025-09-18 10:30:02.797020 | orchestrator | Gathers facts about hosts ----------------------------------------------- 8.72s
2025-09-18 10:30:02.797032 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.53s
2025-09-18 10:30:03.147486 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-09-18 10:30:03.159809 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-09-18 10:30:03.171426 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-09-18 10:30:03.183198 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-09-18 10:30:03.201148 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-09-18 10:30:03.221511 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-09-18 10:30:03.239932 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-09-18 10:30:03.259522 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-09-18 10:30:03.279117 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-09-18 10:30:03.298012 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-09-18 10:30:03.321162 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-09-18 10:30:03.335404 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-09-18 10:30:03.347095 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-09-18 10:30:03.366747 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-09-18 10:30:03.381208 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-09-18 10:30:03.399807 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-09-18 10:30:03.415771 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-09-18 10:30:03.428735 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-09-18 10:30:03.439785 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-09-18 10:30:03.453225 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-09-18 10:30:03.465052 | orchestrator | + [[ false == \t\r\u\e ]]
2025-09-18 10:30:03.619334 | orchestrator | ok: Runtime: 0:23:50.639273
2025-09-18 10:30:03.730617 |
2025-09-18 10:30:03.730759 | TASK [Deploy services]
2025-09-18 10:30:04.262227 | orchestrator | skipping: Conditional result was False
2025-09-18 10:30:04.282738 |
2025-09-18 10:30:04.282978 | TASK [Deploy in a nutshell]
2025-09-18 10:30:04.973591 | orchestrator | + set -e
2025-09-18 10:30:04.973808 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-09-18 10:30:04.973835 | orchestrator | ++ export INTERACTIVE=false
2025-09-18 10:30:04.973880 | orchestrator | ++ INTERACTIVE=false
2025-09-18 10:30:04.973893 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-09-18 10:30:04.973906 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-09-18 10:30:04.973920 | orchestrator | + source /opt/manager-vars.sh
2025-09-18 10:30:04.973966 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-09-18 10:30:04.973995 | orchestrator | ++ NUMBER_OF_NODES=6
2025-09-18 10:30:04.974059 | orchestrator | ++ export CEPH_VERSION=reef
2025-09-18 10:30:04.974079 | orchestrator | ++ CEPH_VERSION=reef
2025-09-18 10:30:04.974092 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-09-18 10:30:04.974111 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-09-18 10:30:04.974122 | orchestrator | ++ export MANAGER_VERSION=latest
2025-09-18 10:30:04.974144 | orchestrator | ++ MANAGER_VERSION=latest
2025-09-18 10:30:04.974155 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-09-18 10:30:04.974170 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-09-18 10:30:04.974181 | orchestrator | ++ export ARA=false
2025-09-18 10:30:04.974192 | orchestrator | ++ ARA=false
2025-09-18 10:30:04.974204 | orchestrator | ++ export DEPLOY_MODE=manager
2025-09-18 10:30:04.974216 | orchestrator | ++ DEPLOY_MODE=manager
2025-09-18 10:30:04.974227 | orchestrator | ++ export TEMPEST=false
2025-09-18 10:30:04.974237 | orchestrator | ++ TEMPEST=false
2025-09-18 10:30:04.974248 | orchestrator | ++ export IS_ZUUL=true
2025-09-18 10:30:04.974259 | orchestrator | ++ IS_ZUUL=true
2025-09-18 10:30:04.974270 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:30:04.974282 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.192.190
2025-09-18 10:30:04.974293 | orchestrator | ++ export EXTERNAL_API=false
2025-09-18 10:30:04.974304 | orchestrator | ++ EXTERNAL_API=false
2025-09-18 10:30:04.974338 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-09-18 10:30:04.974350 | orchestrator | ++ IMAGE_USER=ubuntu
2025-09-18 10:30:04.974361 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-09-18 10:30:04.974372 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-09-18 10:30:04.974384 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-09-18 10:30:04.974402 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-09-18 10:30:04.974413 | orchestrator | + echo
2025-09-18 10:30:04.974425 | orchestrator |
2025-09-18 10:30:04.974437 | orchestrator | # PULL IMAGES
2025-09-18 10:30:04.974448 | orchestrator |
2025-09-18 10:30:04.974459 | orchestrator | + echo '# PULL IMAGES'
2025-09-18 10:30:04.974470 | orchestrator | + echo
2025-09-18 10:30:04.975633 | orchestrator | ++ semver latest 7.0.0
2025-09-18 10:30:05.043536 | orchestrator | + [[ -1 -ge 0 ]]
2025-09-18 10:30:05.043589 | orchestrator | + [[ latest == \l\a\t\e\s\t ]]
2025-09-18 10:30:05.043597 | orchestrator | + osism apply --no-wait -r 2 -e custom pull-images
2025-09-18 10:30:07.069706 | orchestrator | 2025-09-18 10:30:07 | INFO  | Trying to run play pull-images in environment custom
2025-09-18 10:30:17.158682 | orchestrator | 2025-09-18 10:30:17 | INFO  | Task 2d1c655e-e662-49f5-b4a4-374f59b7b2e5 (pull-images) was prepared for execution.
2025-09-18 10:30:17.158795 | orchestrator | 2025-09-18 10:30:17 | INFO  | Task 2d1c655e-e662-49f5-b4a4-374f59b7b2e5 is running in background. No more output. Check ARA for logs.
2025-09-18 10:30:19.479649 | orchestrator | 2025-09-18 10:30:19 | INFO  | Trying to run play wipe-partitions in environment custom
2025-09-18 10:30:29.716944 | orchestrator | 2025-09-18 10:30:29 | INFO  | Task cdcd955e-bc87-464f-9661-dc5ab2160245 (wipe-partitions) was prepared for execution.
2025-09-18 10:30:29.717092 | orchestrator | 2025-09-18 10:30:29 | INFO  | It takes a moment until task cdcd955e-bc87-464f-9661-dc5ab2160245 (wipe-partitions) has been started and output is visible here.
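The `semver latest 7.0.0` call in the trace prints `-1` because `latest` is not a parseable version, so the `-ge 0` test fails and an explicit string comparison catches the `latest` tag instead. A minimal sketch of that version gate, assuming a `semver a b` helper that prints -1/0/1 (the wrapper function name is invented for illustration; only the true branch is visible in this log):

```shell
# Run the "custom" pull-images play when the manager is either a
# pinned release >= 7.0.0 or the "latest" tag. Sketch only, not the
# original deploy script.
pull_images_play() {
    local version="$1"
    # "latest" is not valid semver, so semver prints -1 and the string
    # comparison is the path actually taken in this job.
    if [[ "$(semver "$version" 7.0.0)" -ge 0 ]] || [[ "$version" == "latest" ]]; then
        osism apply --no-wait -r 2 -e custom pull-images
        return 0
    fi
    return 1  # older managers take a different path (not shown in this log)
}
```

With `MANAGER_VERSION=latest`, as exported from `/opt/manager-vars.sh` above, the string branch fires and the play is dispatched with `--no-wait`, which is why its output is deferred to ARA.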
2025-09-18 10:30:41.816536 | orchestrator |
2025-09-18 10:30:41.816634 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-09-18 10:30:41.816650 | orchestrator |
2025-09-18 10:30:41.816661 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-09-18 10:30:41.816676 | orchestrator | Thursday 18 September 2025 10:30:33 +0000 (0:00:00.169) 0:00:00.169 ****
2025-09-18 10:30:41.816689 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:30:41.816701 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:30:41.816712 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:30:41.816724 | orchestrator |
2025-09-18 10:30:41.816735 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-09-18 10:30:41.816768 | orchestrator | Thursday 18 September 2025 10:30:34 +0000 (0:00:00.566) 0:00:00.735 ****
2025-09-18 10:30:41.816780 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:30:41.816792 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:30:41.816807 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:30:41.816818 | orchestrator |
2025-09-18 10:30:41.816829 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-09-18 10:30:41.816841 | orchestrator | Thursday 18 September 2025 10:30:34 +0000 (0:00:00.253) 0:00:00.989 ****
2025-09-18 10:30:41.816851 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:30:41.816863 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:30:41.816873 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:30:41.816884 | orchestrator |
2025-09-18 10:30:41.816895 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-09-18 10:30:41.816906 | orchestrator | Thursday 18 September 2025 10:30:35 +0000 (0:00:00.700) 0:00:01.689 ****
2025-09-18 10:30:41.816917 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:30:41.816928 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:30:41.816939 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:30:41.816949 | orchestrator |
2025-09-18 10:30:41.816960 | orchestrator | TASK [Check device availability] ***********************************************
2025-09-18 10:30:41.816971 | orchestrator | Thursday 18 September 2025 10:30:35 +0000 (0:00:00.277) 0:00:01.966 ****
2025-09-18 10:30:41.816982 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-18 10:30:41.816997 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-18 10:30:41.817008 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-18 10:30:41.817019 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-18 10:30:41.817030 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-18 10:30:41.817041 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-18 10:30:41.817051 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-18 10:30:41.817062 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-18 10:30:41.817075 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-18 10:30:41.817087 | orchestrator |
2025-09-18 10:30:41.817099 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-09-18 10:30:41.817112 | orchestrator | Thursday 18 September 2025 10:30:36 +0000 (0:00:01.123) 0:00:03.090 ****
2025-09-18 10:30:41.817124 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-09-18 10:30:41.817137 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-09-18 10:30:41.817148 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-09-18 10:30:41.817161 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-09-18 10:30:41.817173 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-09-18 10:30:41.817184 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-09-18 10:30:41.817197 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-09-18 10:30:41.817215 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-09-18 10:30:41.817233 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-09-18 10:30:41.817253 | orchestrator |
2025-09-18 10:30:41.817271 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-09-18 10:30:41.817322 | orchestrator | Thursday 18 September 2025 10:30:37 +0000 (0:00:01.274) 0:00:04.365 ****
2025-09-18 10:30:41.817341 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-09-18 10:30:41.817361 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-09-18 10:30:41.817380 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-09-18 10:30:41.817395 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-09-18 10:30:41.817408 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-09-18 10:30:41.817428 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-09-18 10:30:41.817439 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-09-18 10:30:41.817460 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-09-18 10:30:41.817471 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-09-18 10:30:41.817482 | orchestrator |
2025-09-18 10:30:41.817493 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-09-18 10:30:41.817503 | orchestrator | Thursday 18 September 2025 10:30:40 +0000 (0:00:02.280) 0:00:06.645 ****
2025-09-18 10:30:41.817514 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:30:41.817525 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:30:41.817535 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:30:41.817546 | orchestrator |
2025-09-18 10:30:41.817557 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-09-18 10:30:41.817567 | orchestrator | Thursday 18 September 2025 10:30:40 +0000 (0:00:00.585) 0:00:07.231 ****
2025-09-18 10:30:41.817578 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:30:41.817589 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:30:41.817600 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:30:41.817610 | orchestrator |
2025-09-18 10:30:41.817621 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:30:41.817634 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:30:41.817645 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:30:41.817674 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:30:41.817685 | orchestrator |
2025-09-18 10:30:41.817696 | orchestrator |
2025-09-18 10:30:41.817707 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:30:41.817718 | orchestrator | Thursday 18 September 2025 10:30:41 +0000 (0:00:00.605) 0:00:07.836 ****
2025-09-18 10:30:41.817729 | orchestrator | ===============================================================================
2025-09-18 10:30:41.817740 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.28s
2025-09-18 10:30:41.817750 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.27s
2025-09-18 10:30:41.817761 | orchestrator | Check device availability ----------------------------------------------- 1.12s
2025-09-18 10:30:41.817772 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.70s
2025-09-18 10:30:41.817783 | orchestrator | Request device events from the kernel ----------------------------------- 0.61s
2025-09-18 10:30:41.817793 | orchestrator | Reload udev rules ------------------------------------------------------- 0.59s
2025-09-18 10:30:41.817804 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.57s
2025-09-18 10:30:41.817815 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s
2025-09-18 10:30:41.817826 | orchestrator | Remove all rook related logical devices --------------------------------- 0.25s
2025-09-18 10:30:54.143400 | orchestrator | 2025-09-18 10:30:54 | INFO  | Task caec4141-ad38-42f1-9787-a6160638d723 (facts) was prepared for execution.
2025-09-18 10:30:54.143498 | orchestrator | 2025-09-18 10:30:54 | INFO  | It takes a moment until task caec4141-ad38-42f1-9787-a6160638d723 (facts) has been started and output is visible here.
2025-09-18 10:31:06.098796 | orchestrator |
2025-09-18 10:31:06.098900 | orchestrator | PLAY [Apply role facts] ********************************************************
2025-09-18 10:31:06.098916 | orchestrator |
2025-09-18 10:31:06.098928 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-09-18 10:31:06.098940 | orchestrator | Thursday 18 September 2025 10:30:58 +0000 (0:00:00.247) 0:00:00.247 ****
2025-09-18 10:31:06.098952 | orchestrator | ok: [testbed-manager]
2025-09-18 10:31:06.098963 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:31:06.098974 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:31:06.099006 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:31:06.099018 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:31:06.099029 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:31:06.099040 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:31:06.099051 | orchestrator |
2025-09-18 10:31:06.099063 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-09-18 10:31:06.099074 | orchestrator | Thursday 18 September 2025 10:30:59 +0000 (0:00:01.024) 0:00:01.272 ****
2025-09-18 10:31:06.099086 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:31:06.099097 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:31:06.099108 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:31:06.099119 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:31:06.099130 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:06.099141 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:31:06.099152 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:31:06.099163 | orchestrator |
2025-09-18 10:31:06.099174 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-09-18 10:31:06.099185 | orchestrator |
2025-09-18 10:31:06.099195 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-09-18 10:31:06.099207 | orchestrator | Thursday 18 September 2025 10:31:00 +0000 (0:00:01.104) 0:00:02.376 ****
2025-09-18 10:31:06.099218 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:31:06.099229 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:31:06.099241 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:31:06.099252 | orchestrator | ok: [testbed-manager]
2025-09-18 10:31:06.099263 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:31:06.099307 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:31:06.099318 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:31:06.099330 | orchestrator |
2025-09-18 10:31:06.099341 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-09-18 10:31:06.099352 | orchestrator |
2025-09-18 10:31:06.099363 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-09-18 10:31:06.099390 | orchestrator | Thursday 18 September 2025 10:31:04 +0000 (0:00:04.605) 0:00:06.982 ****
2025-09-18 10:31:06.099402 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:31:06.099413 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:31:06.099425 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:31:06.099436 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:31:06.099447 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:06.099458 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:31:06.099469 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:31:06.099480 | orchestrator |
2025-09-18 10:31:06.099491 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:31:06.099502 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099514 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099526 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099537 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099548 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099559 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099570 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:31:06.099581 | orchestrator |
2025-09-18 10:31:06.099600 | orchestrator |
2025-09-18 10:31:06.099611 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:31:06.099622 | orchestrator | Thursday 18 September 2025 10:31:05 +0000 (0:00:00.733) 0:00:07.715 ****
2025-09-18 10:31:06.099633 | orchestrator | ===============================================================================
2025-09-18 10:31:06.099644 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.61s
2025-09-18 10:31:06.099656 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.10s
2025-09-18 10:31:06.099667 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.02s
2025-09-18 10:31:06.099678 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.73s
2025-09-18 10:31:08.496941 | orchestrator | 2025-09-18 10:31:08 | INFO  | Task 5cc0a3ba-d95e-407f-9823-e89be785a185 (ceph-configure-lvm-volumes) was prepared for execution.
2025-09-18 10:31:08.497027 | orchestrator | 2025-09-18 10:31:08 | INFO  | It takes a moment until task 5cc0a3ba-d95e-407f-9823-e89be785a185 (ceph-configure-lvm-volumes) has been started and output is visible here.
2025-09-18 10:31:20.583213 | orchestrator |
2025-09-18 10:31:20.583374 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-09-18 10:31:20.583392 | orchestrator |
2025-09-18 10:31:20.583404 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-18 10:31:20.583419 | orchestrator | Thursday 18 September 2025 10:31:12 +0000 (0:00:00.329) 0:00:00.329 ****
2025-09-18 10:31:20.583431 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-09-18 10:31:20.583443 | orchestrator |
2025-09-18 10:31:20.583454 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-18 10:31:20.583465 | orchestrator | Thursday 18 September 2025 10:31:12 +0000 (0:00:00.274) 0:00:00.603 ****
2025-09-18 10:31:20.583477 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:31:20.583489 | orchestrator |
2025-09-18 10:31:20.583500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583511 | orchestrator | Thursday 18 September 2025 10:31:13 +0000 (0:00:00.242) 0:00:00.845 ****
2025-09-18 10:31:20.583522 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-09-18 10:31:20.583534 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-09-18 10:31:20.583545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-09-18 10:31:20.583556 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-09-18 10:31:20.583567 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-09-18 10:31:20.583578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-09-18 10:31:20.583589 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-09-18 10:31:20.583599 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-09-18 10:31:20.583610 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-09-18 10:31:20.583621 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-09-18 10:31:20.583632 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-09-18 10:31:20.583651 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-09-18 10:31:20.583663 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-09-18 10:31:20.583674 | orchestrator |
2025-09-18 10:31:20.583685 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583696 | orchestrator | Thursday 18 September 2025 10:31:13 +0000 (0:00:00.355) 0:00:01.201 ****
2025-09-18 10:31:20.583707 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583739 | orchestrator |
2025-09-18 10:31:20.583753 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583766 | orchestrator | Thursday 18 September 2025 10:31:14 +0000 (0:00:00.514) 0:00:01.715 ****
2025-09-18 10:31:20.583778 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583790 | orchestrator |
2025-09-18 10:31:20.583803 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583814 | orchestrator | Thursday 18 September 2025 10:31:14 +0000 (0:00:00.207) 0:00:01.922 ****
2025-09-18 10:31:20.583825 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583836 | orchestrator |
2025-09-18 10:31:20.583847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583858 | orchestrator | Thursday 18 September 2025 10:31:14 +0000 (0:00:00.198) 0:00:02.121 ****
2025-09-18 10:31:20.583868 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583884 | orchestrator |
2025-09-18 10:31:20.583895 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583906 | orchestrator | Thursday 18 September 2025 10:31:14 +0000 (0:00:00.223) 0:00:02.345 ****
2025-09-18 10:31:20.583917 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583928 | orchestrator |
2025-09-18 10:31:20.583939 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583950 | orchestrator | Thursday 18 September 2025 10:31:14 +0000 (0:00:00.210) 0:00:02.556 ****
2025-09-18 10:31:20.583961 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.583972 | orchestrator |
2025-09-18 10:31:20.583983 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.583994 | orchestrator | Thursday 18 September 2025 10:31:15 +0000 (0:00:00.195) 0:00:02.751 ****
2025-09-18 10:31:20.584004 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.584015 | orchestrator |
2025-09-18 10:31:20.584026 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.584037 | orchestrator | Thursday 18 September 2025 10:31:15 +0000 (0:00:00.213) 0:00:02.965 ****
2025-09-18 10:31:20.584048 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:31:20.584059 | orchestrator |
2025-09-18 10:31:20.584073 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.584092 | orchestrator | Thursday 18 September 2025 10:31:15 +0000 (0:00:00.207) 0:00:03.173 ****
2025-09-18 10:31:20.584111 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500)
2025-09-18 10:31:20.584132 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500)
2025-09-18 10:31:20.584151 | orchestrator |
2025-09-18 10:31:20.584171 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.584190 | orchestrator | Thursday 18 September 2025 10:31:15 +0000 (0:00:00.420) 0:00:03.594 ****
2025-09-18 10:31:20.584230 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2)
2025-09-18 10:31:20.584280 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2)
2025-09-18 10:31:20.584301 | orchestrator |
2025-09-18 10:31:20.584321 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:31:20.584333 | orchestrator | Thursday 18 September 2025 10:31:16 +0000 (0:00:00.438) 0:00:04.033 ****
2025-09-18 10:31:20.584344 |
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040) 2025-09-18 10:31:20.584355 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040) 2025-09-18 10:31:20.584365 | orchestrator | 2025-09-18 10:31:20.584376 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:20.584387 | orchestrator | Thursday 18 September 2025 10:31:17 +0000 (0:00:00.635) 0:00:04.668 **** 2025-09-18 10:31:20.584397 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2) 2025-09-18 10:31:20.584419 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2) 2025-09-18 10:31:20.584430 | orchestrator | 2025-09-18 10:31:20.584441 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:20.584452 | orchestrator | Thursday 18 September 2025 10:31:17 +0000 (0:00:00.670) 0:00:05.339 **** 2025-09-18 10:31:20.584462 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 10:31:20.584473 | orchestrator | 2025-09-18 10:31:20.584484 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584501 | orchestrator | Thursday 18 September 2025 10:31:18 +0000 (0:00:00.819) 0:00:06.158 **** 2025-09-18 10:31:20.584512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-18 10:31:20.584523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-18 10:31:20.584533 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-18 10:31:20.584544 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => 
(item=loop3) 2025-09-18 10:31:20.584555 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-18 10:31:20.584565 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-18 10:31:20.584576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-18 10:31:20.584587 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-18 10:31:20.584597 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-18 10:31:20.584608 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-18 10:31:20.584619 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-18 10:31:20.584629 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-18 10:31:20.584640 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-18 10:31:20.584651 | orchestrator | 2025-09-18 10:31:20.584661 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584672 | orchestrator | Thursday 18 September 2025 10:31:18 +0000 (0:00:00.395) 0:00:06.554 **** 2025-09-18 10:31:20.584683 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584694 | orchestrator | 2025-09-18 10:31:20.584704 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584715 | orchestrator | Thursday 18 September 2025 10:31:19 +0000 (0:00:00.216) 0:00:06.770 **** 2025-09-18 10:31:20.584725 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584736 | orchestrator | 2025-09-18 10:31:20.584747 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584757 | orchestrator | Thursday 18 September 2025 10:31:19 +0000 (0:00:00.203) 0:00:06.974 **** 2025-09-18 10:31:20.584768 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584779 | orchestrator | 2025-09-18 10:31:20.584789 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584800 | orchestrator | Thursday 18 September 2025 10:31:19 +0000 (0:00:00.204) 0:00:07.179 **** 2025-09-18 10:31:20.584811 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584821 | orchestrator | 2025-09-18 10:31:20.584832 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584843 | orchestrator | Thursday 18 September 2025 10:31:19 +0000 (0:00:00.213) 0:00:07.393 **** 2025-09-18 10:31:20.584854 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584865 | orchestrator | 2025-09-18 10:31:20.584882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584892 | orchestrator | Thursday 18 September 2025 10:31:19 +0000 (0:00:00.198) 0:00:07.591 **** 2025-09-18 10:31:20.584903 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584914 | orchestrator | 2025-09-18 10:31:20.584924 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584935 | orchestrator | Thursday 18 September 2025 10:31:20 +0000 (0:00:00.221) 0:00:07.812 **** 2025-09-18 10:31:20.584945 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:20.584956 | orchestrator | 2025-09-18 10:31:20.584967 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:20.584978 | orchestrator | Thursday 18 September 2025 10:31:20 +0000 (0:00:00.192) 0:00:08.005 **** 2025-09-18 10:31:20.584997 | 
orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535300 | orchestrator | 2025-09-18 10:31:28.535418 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:28.535436 | orchestrator | Thursday 18 September 2025 10:31:20 +0000 (0:00:00.192) 0:00:08.198 **** 2025-09-18 10:31:28.535449 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-18 10:31:28.535462 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-18 10:31:28.535474 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-18 10:31:28.535485 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-18 10:31:28.535497 | orchestrator | 2025-09-18 10:31:28.535508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:28.535520 | orchestrator | Thursday 18 September 2025 10:31:21 +0000 (0:00:01.064) 0:00:09.262 **** 2025-09-18 10:31:28.535531 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535542 | orchestrator | 2025-09-18 10:31:28.535553 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:28.535565 | orchestrator | Thursday 18 September 2025 10:31:21 +0000 (0:00:00.209) 0:00:09.471 **** 2025-09-18 10:31:28.535575 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535587 | orchestrator | 2025-09-18 10:31:28.535598 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:28.535609 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.199) 0:00:09.671 **** 2025-09-18 10:31:28.535620 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535631 | orchestrator | 2025-09-18 10:31:28.535642 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:28.535653 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.225) 
0:00:09.896 **** 2025-09-18 10:31:28.535664 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535675 | orchestrator | 2025-09-18 10:31:28.535686 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 10:31:28.535697 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.215) 0:00:10.112 **** 2025-09-18 10:31:28.535709 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None}) 2025-09-18 10:31:28.535720 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None}) 2025-09-18 10:31:28.535731 | orchestrator | 2025-09-18 10:31:28.535743 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 10:31:28.535754 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.187) 0:00:10.299 **** 2025-09-18 10:31:28.535785 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535799 | orchestrator | 2025-09-18 10:31:28.535811 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-18 10:31:28.535823 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.135) 0:00:10.434 **** 2025-09-18 10:31:28.535835 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535847 | orchestrator | 2025-09-18 10:31:28.535860 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 10:31:28.535872 | orchestrator | Thursday 18 September 2025 10:31:22 +0000 (0:00:00.128) 0:00:10.563 **** 2025-09-18 10:31:28.535885 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.535922 | orchestrator | 2025-09-18 10:31:28.535935 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 10:31:28.535948 | orchestrator | Thursday 18 September 2025 10:31:23 +0000 (0:00:00.189) 0:00:10.753 **** 2025-09-18 10:31:28.535961 | orchestrator | ok: 
[testbed-node-3] 2025-09-18 10:31:28.535973 | orchestrator | 2025-09-18 10:31:28.535984 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 10:31:28.535995 | orchestrator | Thursday 18 September 2025 10:31:23 +0000 (0:00:00.140) 0:00:10.894 **** 2025-09-18 10:31:28.536006 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '727b3796-a5b5-597b-af2a-93b7c6d70a12'}}) 2025-09-18 10:31:28.536018 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}}) 2025-09-18 10:31:28.536029 | orchestrator | 2025-09-18 10:31:28.536040 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 10:31:28.536051 | orchestrator | Thursday 18 September 2025 10:31:23 +0000 (0:00:00.175) 0:00:11.069 **** 2025-09-18 10:31:28.536063 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '727b3796-a5b5-597b-af2a-93b7c6d70a12'}})  2025-09-18 10:31:28.536082 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}})  2025-09-18 10:31:28.536094 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536105 | orchestrator | 2025-09-18 10:31:28.536116 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 10:31:28.536128 | orchestrator | Thursday 18 September 2025 10:31:23 +0000 (0:00:00.152) 0:00:11.221 **** 2025-09-18 10:31:28.536138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '727b3796-a5b5-597b-af2a-93b7c6d70a12'}})  2025-09-18 10:31:28.536150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}})  2025-09-18 10:31:28.536161 | orchestrator | skipping: [testbed-node-3] 2025-09-18 
10:31:28.536172 | orchestrator | 2025-09-18 10:31:28.536183 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 10:31:28.536194 | orchestrator | Thursday 18 September 2025 10:31:23 +0000 (0:00:00.362) 0:00:11.584 **** 2025-09-18 10:31:28.536205 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '727b3796-a5b5-597b-af2a-93b7c6d70a12'}})  2025-09-18 10:31:28.536216 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}})  2025-09-18 10:31:28.536227 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536238 | orchestrator | 2025-09-18 10:31:28.536295 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 10:31:28.536307 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.145) 0:00:11.729 **** 2025-09-18 10:31:28.536318 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:31:28.536329 | orchestrator | 2025-09-18 10:31:28.536341 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 10:31:28.536357 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.146) 0:00:11.876 **** 2025-09-18 10:31:28.536369 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:31:28.536380 | orchestrator | 2025-09-18 10:31:28.536391 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 10:31:28.536402 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.146) 0:00:12.022 **** 2025-09-18 10:31:28.536413 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536425 | orchestrator | 2025-09-18 10:31:28.536436 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 10:31:28.536447 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 
(0:00:00.138) 0:00:12.160 **** 2025-09-18 10:31:28.536458 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536469 | orchestrator | 2025-09-18 10:31:28.536489 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-18 10:31:28.536500 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.136) 0:00:12.297 **** 2025-09-18 10:31:28.536511 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536522 | orchestrator | 2025-09-18 10:31:28.536533 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 10:31:28.536544 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.143) 0:00:12.440 **** 2025-09-18 10:31:28.536556 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 10:31:28.536567 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:28.536578 | orchestrator |  "sdb": { 2025-09-18 10:31:28.536589 | orchestrator |  "osd_lvm_uuid": "727b3796-a5b5-597b-af2a-93b7c6d70a12" 2025-09-18 10:31:28.536601 | orchestrator |  }, 2025-09-18 10:31:28.536612 | orchestrator |  "sdc": { 2025-09-18 10:31:28.536622 | orchestrator |  "osd_lvm_uuid": "9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f" 2025-09-18 10:31:28.536633 | orchestrator |  } 2025-09-18 10:31:28.536644 | orchestrator |  } 2025-09-18 10:31:28.536656 | orchestrator | } 2025-09-18 10:31:28.536667 | orchestrator | 2025-09-18 10:31:28.536678 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 10:31:28.536689 | orchestrator | Thursday 18 September 2025 10:31:24 +0000 (0:00:00.157) 0:00:12.597 **** 2025-09-18 10:31:28.536700 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536711 | orchestrator | 2025-09-18 10:31:28.536722 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 10:31:28.536733 | orchestrator | Thursday 18 September 2025 10:31:25 +0000 
(0:00:00.136) 0:00:12.733 **** 2025-09-18 10:31:28.536744 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536755 | orchestrator | 2025-09-18 10:31:28.536766 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-18 10:31:28.536777 | orchestrator | Thursday 18 September 2025 10:31:25 +0000 (0:00:00.195) 0:00:12.929 **** 2025-09-18 10:31:28.536788 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:31:28.536798 | orchestrator | 2025-09-18 10:31:28.536809 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 10:31:28.536820 | orchestrator | Thursday 18 September 2025 10:31:25 +0000 (0:00:00.172) 0:00:13.101 **** 2025-09-18 10:31:28.536831 | orchestrator | changed: [testbed-node-3] => { 2025-09-18 10:31:28.536842 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 10:31:28.536854 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:28.536865 | orchestrator |  "sdb": { 2025-09-18 10:31:28.536876 | orchestrator |  "osd_lvm_uuid": "727b3796-a5b5-597b-af2a-93b7c6d70a12" 2025-09-18 10:31:28.536887 | orchestrator |  }, 2025-09-18 10:31:28.536898 | orchestrator |  "sdc": { 2025-09-18 10:31:28.536909 | orchestrator |  "osd_lvm_uuid": "9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f" 2025-09-18 10:31:28.536920 | orchestrator |  } 2025-09-18 10:31:28.536931 | orchestrator |  }, 2025-09-18 10:31:28.536942 | orchestrator |  "lvm_volumes": [ 2025-09-18 10:31:28.536953 | orchestrator |  { 2025-09-18 10:31:28.536964 | orchestrator |  "data": "osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12", 2025-09-18 10:31:28.536975 | orchestrator |  "data_vg": "ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12" 2025-09-18 10:31:28.536986 | orchestrator |  }, 2025-09-18 10:31:28.536997 | orchestrator |  { 2025-09-18 10:31:28.537008 | orchestrator |  "data": "osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f", 2025-09-18 10:31:28.537019 | orchestrator |  "data_vg": 
"ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f" 2025-09-18 10:31:28.537030 | orchestrator |  } 2025-09-18 10:31:28.537041 | orchestrator |  ] 2025-09-18 10:31:28.537052 | orchestrator |  } 2025-09-18 10:31:28.537062 | orchestrator | } 2025-09-18 10:31:28.537073 | orchestrator | 2025-09-18 10:31:28.537084 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 10:31:28.537107 | orchestrator | Thursday 18 September 2025 10:31:25 +0000 (0:00:00.214) 0:00:13.316 **** 2025-09-18 10:31:28.537119 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-18 10:31:28.537130 | orchestrator | 2025-09-18 10:31:28.537141 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-18 10:31:28.537152 | orchestrator | 2025-09-18 10:31:28.537163 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 10:31:28.537174 | orchestrator | Thursday 18 September 2025 10:31:27 +0000 (0:00:02.281) 0:00:15.598 **** 2025-09-18 10:31:28.537184 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-18 10:31:28.537195 | orchestrator | 2025-09-18 10:31:28.537206 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 10:31:28.537217 | orchestrator | Thursday 18 September 2025 10:31:28 +0000 (0:00:00.303) 0:00:15.902 **** 2025-09-18 10:31:28.537228 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:31:28.537239 | orchestrator | 2025-09-18 10:31:28.537267 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:28.537284 | orchestrator | Thursday 18 September 2025 10:31:28 +0000 (0:00:00.248) 0:00:16.150 **** 2025-09-18 10:31:36.598358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-09-18 10:31:36.598470 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-09-18 10:31:36.598487 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-09-18 10:31:36.598508 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-09-18 10:31:36.598520 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-09-18 10:31:36.598531 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-09-18 10:31:36.598542 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-09-18 10:31:36.598553 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-09-18 10:31:36.598564 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-09-18 10:31:36.598576 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-09-18 10:31:36.598587 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-09-18 10:31:36.598598 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-09-18 10:31:36.598609 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-09-18 10:31:36.598624 | orchestrator | 2025-09-18 10:31:36.598636 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598648 | orchestrator | Thursday 18 September 2025 10:31:28 +0000 (0:00:00.379) 0:00:16.530 **** 2025-09-18 10:31:36.598660 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598672 | orchestrator | 2025-09-18 10:31:36.598684 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 
10:31:36.598695 | orchestrator | Thursday 18 September 2025 10:31:29 +0000 (0:00:00.204) 0:00:16.734 **** 2025-09-18 10:31:36.598706 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598717 | orchestrator | 2025-09-18 10:31:36.598728 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598739 | orchestrator | Thursday 18 September 2025 10:31:29 +0000 (0:00:00.209) 0:00:16.944 **** 2025-09-18 10:31:36.598750 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598761 | orchestrator | 2025-09-18 10:31:36.598773 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598784 | orchestrator | Thursday 18 September 2025 10:31:29 +0000 (0:00:00.224) 0:00:17.168 **** 2025-09-18 10:31:36.598795 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598827 | orchestrator | 2025-09-18 10:31:36.598839 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598850 | orchestrator | Thursday 18 September 2025 10:31:29 +0000 (0:00:00.200) 0:00:17.369 **** 2025-09-18 10:31:36.598861 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598874 | orchestrator | 2025-09-18 10:31:36.598886 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598898 | orchestrator | Thursday 18 September 2025 10:31:30 +0000 (0:00:00.635) 0:00:18.004 **** 2025-09-18 10:31:36.598910 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598923 | orchestrator | 2025-09-18 10:31:36.598935 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.598948 | orchestrator | Thursday 18 September 2025 10:31:30 +0000 (0:00:00.191) 0:00:18.195 **** 2025-09-18 10:31:36.598960 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.598972 | 
orchestrator | 2025-09-18 10:31:36.598999 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599012 | orchestrator | Thursday 18 September 2025 10:31:30 +0000 (0:00:00.219) 0:00:18.415 **** 2025-09-18 10:31:36.599024 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599037 | orchestrator | 2025-09-18 10:31:36.599049 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599061 | orchestrator | Thursday 18 September 2025 10:31:30 +0000 (0:00:00.199) 0:00:18.614 **** 2025-09-18 10:31:36.599074 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177) 2025-09-18 10:31:36.599087 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177) 2025-09-18 10:31:36.599099 | orchestrator | 2025-09-18 10:31:36.599111 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599124 | orchestrator | Thursday 18 September 2025 10:31:31 +0000 (0:00:00.444) 0:00:19.059 **** 2025-09-18 10:31:36.599136 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70) 2025-09-18 10:31:36.599149 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70) 2025-09-18 10:31:36.599161 | orchestrator | 2025-09-18 10:31:36.599174 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599186 | orchestrator | Thursday 18 September 2025 10:31:31 +0000 (0:00:00.446) 0:00:19.506 **** 2025-09-18 10:31:36.599198 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940) 2025-09-18 10:31:36.599210 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-SQEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940) 2025-09-18 10:31:36.599223 | orchestrator | 2025-09-18 10:31:36.599234 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599264 | orchestrator | Thursday 18 September 2025 10:31:32 +0000 (0:00:00.453) 0:00:19.959 **** 2025-09-18 10:31:36.599293 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4) 2025-09-18 10:31:36.599305 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4) 2025-09-18 10:31:36.599316 | orchestrator | 2025-09-18 10:31:36.599327 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:36.599339 | orchestrator | Thursday 18 September 2025 10:31:32 +0000 (0:00:00.437) 0:00:20.397 **** 2025-09-18 10:31:36.599350 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 10:31:36.599360 | orchestrator | 2025-09-18 10:31:36.599371 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599383 | orchestrator | Thursday 18 September 2025 10:31:33 +0000 (0:00:00.347) 0:00:20.744 **** 2025-09-18 10:31:36.599393 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-09-18 10:31:36.599414 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-09-18 10:31:36.599425 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-09-18 10:31:36.599436 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-09-18 10:31:36.599447 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-09-18 10:31:36.599458 | orchestrator | 
included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-09-18 10:31:36.599469 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-09-18 10:31:36.599480 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-09-18 10:31:36.599491 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-09-18 10:31:36.599502 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-09-18 10:31:36.599513 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-09-18 10:31:36.599523 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-09-18 10:31:36.599534 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-09-18 10:31:36.599545 | orchestrator | 2025-09-18 10:31:36.599556 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599567 | orchestrator | Thursday 18 September 2025 10:31:33 +0000 (0:00:00.378) 0:00:21.123 **** 2025-09-18 10:31:36.599578 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599589 | orchestrator | 2025-09-18 10:31:36.599600 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599611 | orchestrator | Thursday 18 September 2025 10:31:33 +0000 (0:00:00.210) 0:00:21.334 **** 2025-09-18 10:31:36.599622 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599633 | orchestrator | 2025-09-18 10:31:36.599644 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599655 | orchestrator | Thursday 18 September 2025 10:31:34 +0000 (0:00:00.786) 0:00:22.120 **** 
2025-09-18 10:31:36.599672 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599683 | orchestrator | 2025-09-18 10:31:36.599694 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599705 | orchestrator | Thursday 18 September 2025 10:31:34 +0000 (0:00:00.213) 0:00:22.333 **** 2025-09-18 10:31:36.599716 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599727 | orchestrator | 2025-09-18 10:31:36.599738 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599749 | orchestrator | Thursday 18 September 2025 10:31:34 +0000 (0:00:00.200) 0:00:22.534 **** 2025-09-18 10:31:36.599760 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599771 | orchestrator | 2025-09-18 10:31:36.599782 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599793 | orchestrator | Thursday 18 September 2025 10:31:35 +0000 (0:00:00.221) 0:00:22.755 **** 2025-09-18 10:31:36.599804 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599815 | orchestrator | 2025-09-18 10:31:36.599826 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599837 | orchestrator | Thursday 18 September 2025 10:31:35 +0000 (0:00:00.185) 0:00:22.941 **** 2025-09-18 10:31:36.599848 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599859 | orchestrator | 2025-09-18 10:31:36.599870 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.599881 | orchestrator | Thursday 18 September 2025 10:31:35 +0000 (0:00:00.204) 0:00:23.145 **** 2025-09-18 10:31:36.599892 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.599903 | orchestrator | 2025-09-18 10:31:36.599914 | orchestrator | TASK [Add known partitions to the list of available 
block devices] ************* 2025-09-18 10:31:36.599931 | orchestrator | Thursday 18 September 2025 10:31:35 +0000 (0:00:00.201) 0:00:23.347 **** 2025-09-18 10:31:36.599942 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-18 10:31:36.599953 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-18 10:31:36.599964 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-18 10:31:36.599976 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-18 10:31:36.599987 | orchestrator | 2025-09-18 10:31:36.599998 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:36.600009 | orchestrator | Thursday 18 September 2025 10:31:36 +0000 (0:00:00.652) 0:00:23.999 **** 2025-09-18 10:31:36.600020 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:36.600031 | orchestrator | 2025-09-18 10:31:36.600049 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:42.712935 | orchestrator | Thursday 18 September 2025 10:31:36 +0000 (0:00:00.213) 0:00:24.212 **** 2025-09-18 10:31:42.713028 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713045 | orchestrator | 2025-09-18 10:31:42.713058 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:42.713069 | orchestrator | Thursday 18 September 2025 10:31:36 +0000 (0:00:00.214) 0:00:24.426 **** 2025-09-18 10:31:42.713080 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713091 | orchestrator | 2025-09-18 10:31:42.713102 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:42.713114 | orchestrator | Thursday 18 September 2025 10:31:36 +0000 (0:00:00.194) 0:00:24.621 **** 2025-09-18 10:31:42.713124 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713135 | orchestrator | 2025-09-18 10:31:42.713146 | orchestrator | TASK [Set 
UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 10:31:42.713157 | orchestrator | Thursday 18 September 2025 10:31:37 +0000 (0:00:00.211) 0:00:24.833 **** 2025-09-18 10:31:42.713168 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-09-18 10:31:42.713179 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-09-18 10:31:42.713190 | orchestrator | 2025-09-18 10:31:42.713202 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 10:31:42.713212 | orchestrator | Thursday 18 September 2025 10:31:37 +0000 (0:00:00.409) 0:00:25.242 **** 2025-09-18 10:31:42.713223 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713276 | orchestrator | 2025-09-18 10:31:42.713289 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-18 10:31:42.713300 | orchestrator | Thursday 18 September 2025 10:31:37 +0000 (0:00:00.136) 0:00:25.379 **** 2025-09-18 10:31:42.713312 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713323 | orchestrator | 2025-09-18 10:31:42.713334 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 10:31:42.713345 | orchestrator | Thursday 18 September 2025 10:31:37 +0000 (0:00:00.135) 0:00:25.514 **** 2025-09-18 10:31:42.713356 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713367 | orchestrator | 2025-09-18 10:31:42.713377 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 10:31:42.713388 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.138) 0:00:25.652 **** 2025-09-18 10:31:42.713399 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:31:42.713411 | orchestrator | 2025-09-18 10:31:42.713422 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 
10:31:42.713433 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.142) 0:00:25.795 **** 2025-09-18 10:31:42.713444 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}}) 2025-09-18 10:31:42.713456 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a586834-03f6-5ee9-b58c-2d4644436c0e'}}) 2025-09-18 10:31:42.713467 | orchestrator | 2025-09-18 10:31:42.713478 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 10:31:42.713511 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.164) 0:00:25.959 **** 2025-09-18 10:31:42.713525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}})  2025-09-18 10:31:42.713539 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a586834-03f6-5ee9-b58c-2d4644436c0e'}})  2025-09-18 10:31:42.713551 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713564 | orchestrator | 2025-09-18 10:31:42.713576 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 10:31:42.713588 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.129) 0:00:26.088 **** 2025-09-18 10:31:42.713615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}})  2025-09-18 10:31:42.713628 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a586834-03f6-5ee9-b58c-2d4644436c0e'}})  2025-09-18 10:31:42.713640 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713652 | orchestrator | 2025-09-18 10:31:42.713664 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 10:31:42.713677 | 
orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.134) 0:00:26.223 **** 2025-09-18 10:31:42.713689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}})  2025-09-18 10:31:42.713701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a586834-03f6-5ee9-b58c-2d4644436c0e'}})  2025-09-18 10:31:42.713714 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713726 | orchestrator | 2025-09-18 10:31:42.713738 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 10:31:42.713750 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.133) 0:00:26.356 **** 2025-09-18 10:31:42.713762 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:31:42.713774 | orchestrator | 2025-09-18 10:31:42.713786 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 10:31:42.713799 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.122) 0:00:26.479 **** 2025-09-18 10:31:42.713811 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:31:42.713822 | orchestrator | 2025-09-18 10:31:42.713835 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 10:31:42.713847 | orchestrator | Thursday 18 September 2025 10:31:38 +0000 (0:00:00.127) 0:00:26.607 **** 2025-09-18 10:31:42.713857 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713868 | orchestrator | 2025-09-18 10:31:42.713896 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 10:31:42.713908 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.122) 0:00:26.729 **** 2025-09-18 10:31:42.713918 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713929 | orchestrator | 2025-09-18 10:31:42.713940 | orchestrator | TASK 
[Set DB+WAL devices config data] ****************************************** 2025-09-18 10:31:42.713952 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.273) 0:00:27.002 **** 2025-09-18 10:31:42.713963 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.713973 | orchestrator | 2025-09-18 10:31:42.713984 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 10:31:42.713995 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.115) 0:00:27.118 **** 2025-09-18 10:31:42.714006 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 10:31:42.714101 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:42.714114 | orchestrator |  "sdb": { 2025-09-18 10:31:42.714126 | orchestrator |  "osd_lvm_uuid": "f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7" 2025-09-18 10:31:42.714137 | orchestrator |  }, 2025-09-18 10:31:42.714148 | orchestrator |  "sdc": { 2025-09-18 10:31:42.714170 | orchestrator |  "osd_lvm_uuid": "7a586834-03f6-5ee9-b58c-2d4644436c0e" 2025-09-18 10:31:42.714181 | orchestrator |  } 2025-09-18 10:31:42.714192 | orchestrator |  } 2025-09-18 10:31:42.714203 | orchestrator | } 2025-09-18 10:31:42.714214 | orchestrator | 2025-09-18 10:31:42.714225 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 10:31:42.714254 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.119) 0:00:27.237 **** 2025-09-18 10:31:42.714265 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.714276 | orchestrator | 2025-09-18 10:31:42.714287 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 10:31:42.714298 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.116) 0:00:27.353 **** 2025-09-18 10:31:42.714309 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.714320 | orchestrator | 2025-09-18 10:31:42.714331 | orchestrator | TASK [Print 
shared DB/WAL devices] ********************************************* 2025-09-18 10:31:42.714341 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.122) 0:00:27.476 **** 2025-09-18 10:31:42.714352 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:31:42.714363 | orchestrator | 2025-09-18 10:31:42.714374 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 10:31:42.714385 | orchestrator | Thursday 18 September 2025 10:31:39 +0000 (0:00:00.111) 0:00:27.588 **** 2025-09-18 10:31:42.714395 | orchestrator | changed: [testbed-node-4] => { 2025-09-18 10:31:42.714407 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 10:31:42.714417 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:42.714428 | orchestrator |  "sdb": { 2025-09-18 10:31:42.714439 | orchestrator |  "osd_lvm_uuid": "f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7" 2025-09-18 10:31:42.714450 | orchestrator |  }, 2025-09-18 10:31:42.714461 | orchestrator |  "sdc": { 2025-09-18 10:31:42.714472 | orchestrator |  "osd_lvm_uuid": "7a586834-03f6-5ee9-b58c-2d4644436c0e" 2025-09-18 10:31:42.714483 | orchestrator |  } 2025-09-18 10:31:42.714494 | orchestrator |  }, 2025-09-18 10:31:42.714505 | orchestrator |  "lvm_volumes": [ 2025-09-18 10:31:42.714516 | orchestrator |  { 2025-09-18 10:31:42.714527 | orchestrator |  "data": "osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7", 2025-09-18 10:31:42.714538 | orchestrator |  "data_vg": "ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7" 2025-09-18 10:31:42.714548 | orchestrator |  }, 2025-09-18 10:31:42.714559 | orchestrator |  { 2025-09-18 10:31:42.714570 | orchestrator |  "data": "osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e", 2025-09-18 10:31:42.714581 | orchestrator |  "data_vg": "ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e" 2025-09-18 10:31:42.714591 | orchestrator |  } 2025-09-18 10:31:42.714602 | orchestrator |  ] 2025-09-18 10:31:42.714613 | orchestrator |  } 2025-09-18 10:31:42.714624 | 
orchestrator | } 2025-09-18 10:31:42.714635 | orchestrator | 2025-09-18 10:31:42.714646 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 10:31:42.714657 | orchestrator | Thursday 18 September 2025 10:31:40 +0000 (0:00:00.177) 0:00:27.765 **** 2025-09-18 10:31:42.714668 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-09-18 10:31:42.714679 | orchestrator | 2025-09-18 10:31:42.714689 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-09-18 10:31:42.714700 | orchestrator | 2025-09-18 10:31:42.714711 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 10:31:42.714722 | orchestrator | Thursday 18 September 2025 10:31:41 +0000 (0:00:00.944) 0:00:28.709 **** 2025-09-18 10:31:42.714733 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-18 10:31:42.714743 | orchestrator | 2025-09-18 10:31:42.714754 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 10:31:42.714765 | orchestrator | Thursday 18 September 2025 10:31:41 +0000 (0:00:00.399) 0:00:29.109 **** 2025-09-18 10:31:42.714783 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:31:42.714794 | orchestrator | 2025-09-18 10:31:42.714805 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:42.714816 | orchestrator | Thursday 18 September 2025 10:31:42 +0000 (0:00:00.789) 0:00:29.898 **** 2025-09-18 10:31:42.714833 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-09-18 10:31:42.714844 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-09-18 10:31:42.714855 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-09-18 
10:31:42.714866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-09-18 10:31:42.714876 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-09-18 10:31:42.714887 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-09-18 10:31:42.714906 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-09-18 10:31:51.969562 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-09-18 10:31:51.969676 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-09-18 10:31:51.969691 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb) 2025-09-18 10:31:51.969703 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc) 2025-09-18 10:31:51.969714 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd) 2025-09-18 10:31:51.969725 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0) 2025-09-18 10:31:51.969737 | orchestrator | 2025-09-18 10:31:51.969748 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969760 | orchestrator | Thursday 18 September 2025 10:31:42 +0000 (0:00:00.424) 0:00:30.323 **** 2025-09-18 10:31:51.969772 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.969784 | orchestrator | 2025-09-18 10:31:51.969796 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969807 | orchestrator | Thursday 18 September 2025 10:31:43 +0000 (0:00:00.311) 0:00:30.634 **** 2025-09-18 10:31:51.969818 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.969829 | orchestrator | 
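The "Add known links" tasks that follow associate each physical disk with its persistent `/dev/disk/by-id` names; in the log each QEMU disk gets both a `scsi-0QEMU_...` and a `scsi-SQEMU_...` entry embedding the same serial. A minimal sketch of that association — matching by-id link names to a disk by its embedded serial — could look like this (`links_for_serial` is a hypothetical helper; the real task file `_add-device-links.yml` is not shown here):

```python
def links_for_serial(by_id_links, serial):
    """Pick the /dev/disk/by-id entries whose name ends with
    the given disk serial/WWN suffix."""
    return sorted(l for l in by_id_links if l.endswith(serial))

# by-id names as they appear for testbed-node-5 in the log
links = [
    "scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6",
    "scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6",
    "ata-QEMU_DVD-ROM_QM00001",
]
print(links_for_serial(links, "d793f249-b859-4211-aee9-7d27fd7330c6"))
# ['scsi-0QEMU_QEMU_HARDDISK_d793f249-...', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-...']
```

Persistent by-id names survive device reordering across reboots, which is why they are collected alongside the kernel names (`sdb`, `sdc`) before any OSD layout decisions are made.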
2025-09-18 10:31:51.969840 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969851 | orchestrator | Thursday 18 September 2025 10:31:43 +0000 (0:00:00.212) 0:00:30.847 **** 2025-09-18 10:31:51.969862 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.969873 | orchestrator | 2025-09-18 10:31:51.969884 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969895 | orchestrator | Thursday 18 September 2025 10:31:43 +0000 (0:00:00.198) 0:00:31.045 **** 2025-09-18 10:31:51.969906 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.969917 | orchestrator | 2025-09-18 10:31:51.969928 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969939 | orchestrator | Thursday 18 September 2025 10:31:43 +0000 (0:00:00.206) 0:00:31.251 **** 2025-09-18 10:31:51.969950 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.969961 | orchestrator | 2025-09-18 10:31:51.969972 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.969983 | orchestrator | Thursday 18 September 2025 10:31:43 +0000 (0:00:00.217) 0:00:31.469 **** 2025-09-18 10:31:51.969994 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970005 | orchestrator | 2025-09-18 10:31:51.970076 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970092 | orchestrator | Thursday 18 September 2025 10:31:44 +0000 (0:00:00.207) 0:00:31.677 **** 2025-09-18 10:31:51.970104 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970140 | orchestrator | 2025-09-18 10:31:51.970153 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970165 | orchestrator | Thursday 18 September 2025 10:31:44 +0000 
(0:00:00.217) 0:00:31.894 **** 2025-09-18 10:31:51.970177 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970189 | orchestrator | 2025-09-18 10:31:51.970201 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970213 | orchestrator | Thursday 18 September 2025 10:31:44 +0000 (0:00:00.230) 0:00:32.125 **** 2025-09-18 10:31:51.970248 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6) 2025-09-18 10:31:51.970263 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6) 2025-09-18 10:31:51.970275 | orchestrator | 2025-09-18 10:31:51.970287 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970299 | orchestrator | Thursday 18 September 2025 10:31:45 +0000 (0:00:00.659) 0:00:32.785 **** 2025-09-18 10:31:51.970311 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52) 2025-09-18 10:31:51.970323 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52) 2025-09-18 10:31:51.970335 | orchestrator | 2025-09-18 10:31:51.970347 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970359 | orchestrator | Thursday 18 September 2025 10:31:46 +0000 (0:00:00.893) 0:00:33.678 **** 2025-09-18 10:31:51.970371 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836) 2025-09-18 10:31:51.970384 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836) 2025-09-18 10:31:51.970395 | orchestrator | 2025-09-18 10:31:51.970407 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970420 | orchestrator | 
Thursday 18 September 2025 10:31:46 +0000 (0:00:00.473) 0:00:34.152 **** 2025-09-18 10:31:51.970432 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64) 2025-09-18 10:31:51.970443 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64) 2025-09-18 10:31:51.970454 | orchestrator | 2025-09-18 10:31:51.970465 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:31:51.970476 | orchestrator | Thursday 18 September 2025 10:31:46 +0000 (0:00:00.464) 0:00:34.617 **** 2025-09-18 10:31:51.970487 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 10:31:51.970497 | orchestrator | 2025-09-18 10:31:51.970508 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970519 | orchestrator | Thursday 18 September 2025 10:31:47 +0000 (0:00:00.333) 0:00:34.950 **** 2025-09-18 10:31:51.970549 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0) 2025-09-18 10:31:51.970561 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1) 2025-09-18 10:31:51.970572 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2) 2025-09-18 10:31:51.970583 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3) 2025-09-18 10:31:51.970594 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4) 2025-09-18 10:31:51.970604 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5) 2025-09-18 10:31:51.970615 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6) 2025-09-18 10:31:51.970626 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7) 2025-09-18 10:31:51.970637 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda) 2025-09-18 10:31:51.970674 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb) 2025-09-18 10:31:51.970686 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc) 2025-09-18 10:31:51.970697 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd) 2025-09-18 10:31:51.970707 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0) 2025-09-18 10:31:51.970718 | orchestrator | 2025-09-18 10:31:51.970729 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970740 | orchestrator | Thursday 18 September 2025 10:31:47 +0000 (0:00:00.413) 0:00:35.363 **** 2025-09-18 10:31:51.970751 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970761 | orchestrator | 2025-09-18 10:31:51.970772 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970783 | orchestrator | Thursday 18 September 2025 10:31:47 +0000 (0:00:00.201) 0:00:35.565 **** 2025-09-18 10:31:51.970794 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970805 | orchestrator | 2025-09-18 10:31:51.970815 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970827 | orchestrator | Thursday 18 September 2025 10:31:48 +0000 (0:00:00.228) 0:00:35.794 **** 2025-09-18 10:31:51.970837 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970848 | orchestrator | 2025-09-18 10:31:51.970864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970875 | 
orchestrator | Thursday 18 September 2025 10:31:48 +0000 (0:00:00.229) 0:00:36.023 **** 2025-09-18 10:31:51.970886 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970896 | orchestrator | 2025-09-18 10:31:51.970907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970918 | orchestrator | Thursday 18 September 2025 10:31:48 +0000 (0:00:00.253) 0:00:36.276 **** 2025-09-18 10:31:51.970929 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970939 | orchestrator | 2025-09-18 10:31:51.970950 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.970961 | orchestrator | Thursday 18 September 2025 10:31:48 +0000 (0:00:00.237) 0:00:36.514 **** 2025-09-18 10:31:51.970972 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.970982 | orchestrator | 2025-09-18 10:31:51.970993 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971004 | orchestrator | Thursday 18 September 2025 10:31:49 +0000 (0:00:00.876) 0:00:37.390 **** 2025-09-18 10:31:51.971014 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971025 | orchestrator | 2025-09-18 10:31:51.971036 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971046 | orchestrator | Thursday 18 September 2025 10:31:50 +0000 (0:00:00.238) 0:00:37.629 **** 2025-09-18 10:31:51.971057 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971068 | orchestrator | 2025-09-18 10:31:51.971078 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971089 | orchestrator | Thursday 18 September 2025 10:31:50 +0000 (0:00:00.212) 0:00:37.842 **** 2025-09-18 10:31:51.971100 | orchestrator | ok: [testbed-node-5] => (item=sda1) 2025-09-18 10:31:51.971111 | orchestrator | 
ok: [testbed-node-5] => (item=sda14) 2025-09-18 10:31:51.971122 | orchestrator | ok: [testbed-node-5] => (item=sda15) 2025-09-18 10:31:51.971133 | orchestrator | ok: [testbed-node-5] => (item=sda16) 2025-09-18 10:31:51.971144 | orchestrator | 2025-09-18 10:31:51.971155 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971166 | orchestrator | Thursday 18 September 2025 10:31:51 +0000 (0:00:00.841) 0:00:38.683 **** 2025-09-18 10:31:51.971176 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971187 | orchestrator | 2025-09-18 10:31:51.971198 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971215 | orchestrator | Thursday 18 September 2025 10:31:51 +0000 (0:00:00.228) 0:00:38.912 **** 2025-09-18 10:31:51.971252 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971264 | orchestrator | 2025-09-18 10:31:51.971275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971286 | orchestrator | Thursday 18 September 2025 10:31:51 +0000 (0:00:00.237) 0:00:39.150 **** 2025-09-18 10:31:51.971296 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971307 | orchestrator | 2025-09-18 10:31:51.971318 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:31:51.971329 | orchestrator | Thursday 18 September 2025 10:31:51 +0000 (0:00:00.211) 0:00:39.361 **** 2025-09-18 10:31:51.971340 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:51.971350 | orchestrator | 2025-09-18 10:31:51.971361 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-09-18 10:31:51.971378 | orchestrator | Thursday 18 September 2025 10:31:51 +0000 (0:00:00.218) 0:00:39.579 **** 2025-09-18 10:31:56.115924 | orchestrator | ok: [testbed-node-5] => (item={'key': 
'sdb', 'value': None}) 2025-09-18 10:31:56.115984 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None}) 2025-09-18 10:31:56.115992 | orchestrator | 2025-09-18 10:31:56.115999 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-09-18 10:31:56.116004 | orchestrator | Thursday 18 September 2025 10:31:52 +0000 (0:00:00.226) 0:00:39.806 **** 2025-09-18 10:31:56.116010 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116016 | orchestrator | 2025-09-18 10:31:56.116022 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-09-18 10:31:56.116027 | orchestrator | Thursday 18 September 2025 10:31:52 +0000 (0:00:00.137) 0:00:39.944 **** 2025-09-18 10:31:56.116033 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116038 | orchestrator | 2025-09-18 10:31:56.116044 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-09-18 10:31:56.116049 | orchestrator | Thursday 18 September 2025 10:31:52 +0000 (0:00:00.149) 0:00:40.094 **** 2025-09-18 10:31:56.116055 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116060 | orchestrator | 2025-09-18 10:31:56.116066 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-09-18 10:31:56.116071 | orchestrator | Thursday 18 September 2025 10:31:52 +0000 (0:00:00.135) 0:00:40.230 **** 2025-09-18 10:31:56.116077 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:31:56.116083 | orchestrator | 2025-09-18 10:31:56.116088 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-09-18 10:31:56.116094 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.429) 0:00:40.660 **** 2025-09-18 10:31:56.116100 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47a403a8-a225-5ee6-9198-c4852ee3470e'}}) 
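The "Generate lvm_volumes structure (block only)" task maps each entry of `ceph_osd_devices` to an LVM layout, and the "Print configuration data" output earlier in the log shows the resulting shape exactly: each OSD becomes an LV named `osd-block-<uuid>` inside a VG named `ceph-<uuid>`. A sketch of that transformation (the function name is mine; the naming scheme is taken verbatim from the logged output):

```python
def lvm_volumes_block_only(ceph_osd_devices):
    """Build the block-only lvm_volumes list as printed in the log:
    one {data, data_vg} entry per OSD device, keyed off its LVM UUID."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]

# ceph_osd_devices for testbed-node-5, as printed in the log
devs = {
    "sdb": {"osd_lvm_uuid": "47a403a8-a225-5ee6-9198-c4852ee3470e"},
    "sdc": {"osd_lvm_uuid": "a661e8c0-0419-5fc2-afc1-c6737c299168"},
}
print(lvm_volumes_block_only(devs))
```

The skipped `block + db`, `block + wal`, and `block + db + wal` variants would extend each entry with `db`/`db_vg` and `wal`/`wal_vg` keys; they are skipped here because no dedicated DB or WAL devices are configured on these nodes (the "Set DB/WAL devices config data" tasks all report `skipping`).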
2025-09-18 10:31:56.116106 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a661e8c0-0419-5fc2-afc1-c6737c299168'}}) 2025-09-18 10:31:56.116111 | orchestrator | 2025-09-18 10:31:56.116117 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-09-18 10:31:56.116122 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.221) 0:00:40.881 **** 2025-09-18 10:31:56.116128 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47a403a8-a225-5ee6-9198-c4852ee3470e'}})  2025-09-18 10:31:56.116135 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a661e8c0-0419-5fc2-afc1-c6737c299168'}})  2025-09-18 10:31:56.116141 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116146 | orchestrator | 2025-09-18 10:31:56.116152 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-09-18 10:31:56.116158 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.176) 0:00:41.057 **** 2025-09-18 10:31:56.116163 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47a403a8-a225-5ee6-9198-c4852ee3470e'}})  2025-09-18 10:31:56.116184 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a661e8c0-0419-5fc2-afc1-c6737c299168'}})  2025-09-18 10:31:56.116190 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116195 | orchestrator | 2025-09-18 10:31:56.116200 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-09-18 10:31:56.116206 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.178) 0:00:41.236 **** 2025-09-18 10:31:56.116211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47a403a8-a225-5ee6-9198-c4852ee3470e'}})  2025-09-18 
10:31:56.116217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a661e8c0-0419-5fc2-afc1-c6737c299168'}})  2025-09-18 10:31:56.116252 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116259 | orchestrator | 2025-09-18 10:31:56.116264 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-09-18 10:31:56.116270 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.202) 0:00:41.438 **** 2025-09-18 10:31:56.116275 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:31:56.116281 | orchestrator | 2025-09-18 10:31:56.116296 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-09-18 10:31:56.116302 | orchestrator | Thursday 18 September 2025 10:31:53 +0000 (0:00:00.147) 0:00:41.585 **** 2025-09-18 10:31:56.116307 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:31:56.116313 | orchestrator | 2025-09-18 10:31:56.116318 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-09-18 10:31:56.116324 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.108) 0:00:41.694 **** 2025-09-18 10:31:56.116329 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116335 | orchestrator | 2025-09-18 10:31:56.116340 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-09-18 10:31:56.116346 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.118) 0:00:41.813 **** 2025-09-18 10:31:56.116351 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116356 | orchestrator | 2025-09-18 10:31:56.116362 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-09-18 10:31:56.116367 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.108) 0:00:41.921 **** 2025-09-18 10:31:56.116373 | orchestrator | skipping: [testbed-node-5] 
2025-09-18 10:31:56.116378 | orchestrator | 2025-09-18 10:31:56.116384 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-09-18 10:31:56.116389 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.132) 0:00:42.054 **** 2025-09-18 10:31:56.116395 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 10:31:56.116400 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:56.116406 | orchestrator |  "sdb": { 2025-09-18 10:31:56.116412 | orchestrator |  "osd_lvm_uuid": "47a403a8-a225-5ee6-9198-c4852ee3470e" 2025-09-18 10:31:56.116428 | orchestrator |  }, 2025-09-18 10:31:56.116434 | orchestrator |  "sdc": { 2025-09-18 10:31:56.116440 | orchestrator |  "osd_lvm_uuid": "a661e8c0-0419-5fc2-afc1-c6737c299168" 2025-09-18 10:31:56.116446 | orchestrator |  } 2025-09-18 10:31:56.116451 | orchestrator |  } 2025-09-18 10:31:56.116457 | orchestrator | } 2025-09-18 10:31:56.116463 | orchestrator | 2025-09-18 10:31:56.116468 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-09-18 10:31:56.116474 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.123) 0:00:42.177 **** 2025-09-18 10:31:56.116479 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116485 | orchestrator | 2025-09-18 10:31:56.116490 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-09-18 10:31:56.116495 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.099) 0:00:42.276 **** 2025-09-18 10:31:56.116501 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:31:56.116509 | orchestrator | 2025-09-18 10:31:56.116517 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-09-18 10:31:56.116534 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.229) 0:00:42.505 **** 2025-09-18 10:31:56.116542 | orchestrator | skipping: [testbed-node-5] 
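The osd_lvm_uuid values printed above all carry a version-5 nibble in the third group (…-5ee6-…, …-5fc2-…), which suggests they are deterministic, name-based UUIDs rather than random ones. A minimal sketch of such a scheme — the namespace and the name format used here are assumptions for illustration, not the playbook's actual inputs:

```python
import uuid

# Hypothetical namespace; the real role's namespace and name string
# are not visible in this log.
OSD_NAMESPACE = uuid.UUID("00000000-0000-0000-0000-000000000000")

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Derive a stable per-host, per-device ID (assumed uuid5 scheme).

    Deterministic IDs mean re-running the play on the same host and
    device reproduces the same VG/LV names instead of inventing new ones.
    """
    return str(uuid.uuid5(OSD_NAMESPACE, f"{hostname}-{device}"))
```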
2025-09-18 10:31:56.116551 | orchestrator | 2025-09-18 10:31:56.116560 | orchestrator | TASK [Print configuration data] ************************************************ 2025-09-18 10:31:56.116569 | orchestrator | Thursday 18 September 2025 10:31:54 +0000 (0:00:00.095) 0:00:42.600 **** 2025-09-18 10:31:56.116577 | orchestrator | changed: [testbed-node-5] => { 2025-09-18 10:31:56.116585 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-09-18 10:31:56.116594 | orchestrator |  "ceph_osd_devices": { 2025-09-18 10:31:56.116604 | orchestrator |  "sdb": { 2025-09-18 10:31:56.116612 | orchestrator |  "osd_lvm_uuid": "47a403a8-a225-5ee6-9198-c4852ee3470e" 2025-09-18 10:31:56.116622 | orchestrator |  }, 2025-09-18 10:31:56.116631 | orchestrator |  "sdc": { 2025-09-18 10:31:56.116639 | orchestrator |  "osd_lvm_uuid": "a661e8c0-0419-5fc2-afc1-c6737c299168" 2025-09-18 10:31:56.116648 | orchestrator |  } 2025-09-18 10:31:56.116656 | orchestrator |  }, 2025-09-18 10:31:56.116665 | orchestrator |  "lvm_volumes": [ 2025-09-18 10:31:56.116673 | orchestrator |  { 2025-09-18 10:31:56.116681 | orchestrator |  "data": "osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e", 2025-09-18 10:31:56.116690 | orchestrator |  "data_vg": "ceph-47a403a8-a225-5ee6-9198-c4852ee3470e" 2025-09-18 10:31:56.116698 | orchestrator |  }, 2025-09-18 10:31:56.116707 | orchestrator |  { 2025-09-18 10:31:56.116715 | orchestrator |  "data": "osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168", 2025-09-18 10:31:56.116724 | orchestrator |  "data_vg": "ceph-a661e8c0-0419-5fc2-afc1-c6737c299168" 2025-09-18 10:31:56.116732 | orchestrator |  } 2025-09-18 10:31:56.116738 | orchestrator |  ] 2025-09-18 10:31:56.116745 | orchestrator |  } 2025-09-18 10:31:56.116755 | orchestrator | } 2025-09-18 10:31:56.116763 | orchestrator | 2025-09-18 10:31:56.116772 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-09-18 10:31:56.116780 | orchestrator | Thursday 18 September 2025 
10:31:55 +0000 (0:00:00.198) 0:00:42.799 **** 2025-09-18 10:31:56.116789 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-09-18 10:31:56.116797 | orchestrator | 2025-09-18 10:31:56.116806 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:31:56.116814 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:31:56.116824 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:31:56.116832 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:31:56.116840 | orchestrator | 2025-09-18 10:31:56.116849 | orchestrator | 2025-09-18 10:31:56.116858 | orchestrator | 2025-09-18 10:31:56.116865 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:31:56.116870 | orchestrator | Thursday 18 September 2025 10:31:56 +0000 (0:00:00.929) 0:00:43.729 **** 2025-09-18 10:31:56.116876 | orchestrator | =============================================================================== 2025-09-18 10:31:56.116881 | orchestrator | Write configuration file ------------------------------------------------ 4.16s 2025-09-18 10:31:56.116887 | orchestrator | Get initial list of available block devices ----------------------------- 1.28s 2025-09-18 10:31:56.116892 | orchestrator | Add known partitions to the list of available block devices ------------- 1.19s 2025-09-18 10:31:56.116897 | orchestrator | Add known links to the list of available block devices ------------------ 1.16s 2025-09-18 10:31:56.116903 | orchestrator | Add known partitions to the list of available block devices ------------- 1.06s 2025-09-18 10:31:56.116913 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.98s 2025-09-18 10:31:56.116918 | 
orchestrator | Add known links to the list of available block devices ------------------ 0.89s 2025-09-18 10:31:56.116923 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-09-18 10:31:56.116929 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s 2025-09-18 10:31:56.116934 | orchestrator | Set UUIDs for OSD VGs/LVs ----------------------------------------------- 0.82s 2025-09-18 10:31:56.116940 | orchestrator | Add known links to the list of available block devices ------------------ 0.82s 2025-09-18 10:31:56.116945 | orchestrator | Add known partitions to the list of available block devices ------------- 0.79s 2025-09-18 10:31:56.116950 | orchestrator | Define lvm_volumes structures ------------------------------------------- 0.71s 2025-09-18 10:31:56.116956 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s 2025-09-18 10:31:56.116966 | orchestrator | Add known links to the list of available block devices ------------------ 0.67s 2025-09-18 10:31:56.366977 | orchestrator | Add known links to the list of available block devices ------------------ 0.66s 2025-09-18 10:31:56.367015 | orchestrator | Add known partitions to the list of available block devices ------------- 0.65s 2025-09-18 10:31:56.367020 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-18 10:31:56.367024 | orchestrator | Add known links to the list of available block devices ------------------ 0.64s 2025-09-18 10:31:56.367028 | orchestrator | Print configuration data ------------------------------------------------ 0.59s 2025-09-18 10:32:18.909116 | orchestrator | 2025-09-18 10:32:18 | INFO  | Task 14d12b36-4b66-4755-8005-97fa3bc06d11 (sync inventory) is running in background. Output coming soon. 
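The "Print configuration data" output above shows the transformation this play performs in the block-only case: each ceph_osd_devices entry yields one lvm_volumes element whose LV is named osd-block-&lt;uuid&gt; and whose VG is ceph-&lt;uuid&gt;, with no DB or WAL volumes (those tasks were all skipped). A plain-Python sketch of that mapping — the playbook itself does this with Jinja templating, not this function:

```python
def to_lvm_volumes(ceph_osd_devices: dict) -> list:
    """Mirror the block-only case seen in the log: one data LV per
    device, both LV and VG named after the device's osd_lvm_uuid."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in ceph_osd_devices.values()
    ]
```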
2025-09-18 10:32:45.908774 | orchestrator | 2025-09-18 10:32:20 | INFO  | Starting group_vars file reorganization 2025-09-18 10:32:45.908861 | orchestrator | 2025-09-18 10:32:20 | INFO  | Moved 0 file(s) to their respective directories 2025-09-18 10:32:45.908872 | orchestrator | 2025-09-18 10:32:20 | INFO  | Group_vars file reorganization completed 2025-09-18 10:32:45.908879 | orchestrator | 2025-09-18 10:32:22 | INFO  | Starting variable preparation from inventory 2025-09-18 10:32:45.908888 | orchestrator | 2025-09-18 10:32:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts 2025-09-18 10:32:45.908898 | orchestrator | 2025-09-18 10:32:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons 2025-09-18 10:32:45.908909 | orchestrator | 2025-09-18 10:32:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid 2025-09-18 10:32:45.908920 | orchestrator | 2025-09-18 10:32:26 | INFO  | 3 file(s) written, 6 host(s) processed 2025-09-18 10:32:45.908930 | orchestrator | 2025-09-18 10:32:26 | INFO  | Variable preparation completed 2025-09-18 10:32:45.908941 | orchestrator | 2025-09-18 10:32:28 | INFO  | Starting inventory overwrite handling 2025-09-18 10:32:45.908951 | orchestrator | 2025-09-18 10:32:28 | INFO  | Handling group overwrites in 99-overwrite 2025-09-18 10:32:45.908986 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group frr:children from 60-generic 2025-09-18 10:32:45.908998 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group storage:children from 50-kolla 2025-09-18 10:32:45.909008 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group netbird:children from 50-infrastruture 2025-09-18 10:32:45.909017 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group ceph-rgw from 50-ceph 2025-09-18 10:32:45.909028 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group ceph-mds from 50-ceph 2025-09-18 10:32:45.909038 | orchestrator | 2025-09-18 10:32:28 | INFO  | Handling group 
overwrites in 20-roles 2025-09-18 10:32:45.909049 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removing group k3s_node from 50-infrastruture 2025-09-18 10:32:45.909083 | orchestrator | 2025-09-18 10:32:28 | INFO  | Removed 6 group(s) in total 2025-09-18 10:32:45.909095 | orchestrator | 2025-09-18 10:32:28 | INFO  | Inventory overwrite handling completed 2025-09-18 10:32:45.909106 | orchestrator | 2025-09-18 10:32:29 | INFO  | Starting merge of inventory files 2025-09-18 10:32:45.909118 | orchestrator | 2025-09-18 10:32:29 | INFO  | Inventory files merged successfully 2025-09-18 10:32:45.909129 | orchestrator | 2025-09-18 10:32:35 | INFO  | Generating ClusterShell configuration from Ansible inventory 2025-09-18 10:32:45.909140 | orchestrator | 2025-09-18 10:32:44 | INFO  | Successfully wrote ClusterShell configuration 2025-09-18 10:32:45.909151 | orchestrator | [master a88bced] 2025-09-18-10-32 2025-09-18 10:32:45.909164 | orchestrator | 1 file changed, 30 insertions(+), 9 deletions(-) 2025-09-18 10:32:48.435512 | orchestrator | 2025-09-18 10:32:48 | INFO  | Task c7736c65-ddfd-4434-af7a-66f156a676dd (ceph-create-lvm-devices) was prepared for execution. 2025-09-18 10:32:48.435597 | orchestrator | 2025-09-18 10:32:48 | INFO  | It takes a moment until task c7736c65-ddfd-4434-af7a-66f156a676dd (ceph-create-lvm-devices) has been started and output is visible here. 
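The ceph-create-lvm-devices play that follows turns each lvm_volumes entry into a volume group and a block logical volume (the "Create block VGs" and "Create block LVs" tasks below). Assuming these map onto plain LVM operations — the role's actual module calls are not shown in this log — the equivalent commands can be sketched as:

```python
def lvm_create_commands(device: str, osd_lvm_uuid: str) -> list:
    """Commands equivalent (by assumption) to the VG/LV creation tasks:
    one VG per physical device, one block LV consuming all of it."""
    vg = f"ceph-{osd_lvm_uuid}"
    lv = f"osd-block-{osd_lvm_uuid}"
    return [
        ["vgcreate", vg, f"/dev/{device}"],
        ["lvcreate", "-n", lv, "-l", "100%FREE", vg],
    ]
```

Running these by hand on a node would reproduce the same VG/LV names the play reports as changed for testbed-node-3.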
2025-09-18 10:33:01.495736 | orchestrator | 2025-09-18 10:33:01.495890 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-09-18 10:33:01.495909 | orchestrator | 2025-09-18 10:33:01.495922 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-09-18 10:33:01.495935 | orchestrator | Thursday 18 September 2025 10:32:52 +0000 (0:00:00.365) 0:00:00.365 **** 2025-09-18 10:33:01.495948 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] 2025-09-18 10:33:01.495959 | orchestrator | 2025-09-18 10:33:01.495971 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-09-18 10:33:01.495982 | orchestrator | Thursday 18 September 2025 10:32:53 +0000 (0:00:00.281) 0:00:00.647 **** 2025-09-18 10:33:01.495993 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:33:01.496006 | orchestrator | 2025-09-18 10:33:01.496017 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496028 | orchestrator | Thursday 18 September 2025 10:32:53 +0000 (0:00:00.253) 0:00:00.900 **** 2025-09-18 10:33:01.496040 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0) 2025-09-18 10:33:01.496053 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1) 2025-09-18 10:33:01.496064 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2) 2025-09-18 10:33:01.496075 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3) 2025-09-18 10:33:01.496086 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4) 2025-09-18 10:33:01.496098 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5) 2025-09-18 10:33:01.496108 | orchestrator | included: 
/ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6) 2025-09-18 10:33:01.496120 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7) 2025-09-18 10:33:01.496131 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda) 2025-09-18 10:33:01.496142 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb) 2025-09-18 10:33:01.496153 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc) 2025-09-18 10:33:01.496341 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd) 2025-09-18 10:33:01.496466 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0) 2025-09-18 10:33:01.496480 | orchestrator | 2025-09-18 10:33:01.496492 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496535 | orchestrator | Thursday 18 September 2025 10:32:53 +0000 (0:00:00.460) 0:00:01.361 **** 2025-09-18 10:33:01.496545 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496557 | orchestrator | 2025-09-18 10:33:01.496567 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496576 | orchestrator | Thursday 18 September 2025 10:32:54 +0000 (0:00:00.506) 0:00:01.867 **** 2025-09-18 10:33:01.496585 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496594 | orchestrator | 2025-09-18 10:33:01.496603 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496612 | orchestrator | Thursday 18 September 2025 10:32:54 +0000 (0:00:00.262) 0:00:02.130 **** 2025-09-18 10:33:01.496621 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496630 | orchestrator | 2025-09-18 10:33:01.496638 | orchestrator | TASK [Add known links 
to the list of available block devices] ****************** 2025-09-18 10:33:01.496647 | orchestrator | Thursday 18 September 2025 10:32:54 +0000 (0:00:00.189) 0:00:02.320 **** 2025-09-18 10:33:01.496656 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496665 | orchestrator | 2025-09-18 10:33:01.496673 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496682 | orchestrator | Thursday 18 September 2025 10:32:55 +0000 (0:00:00.237) 0:00:02.558 **** 2025-09-18 10:33:01.496691 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496699 | orchestrator | 2025-09-18 10:33:01.496708 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496717 | orchestrator | Thursday 18 September 2025 10:32:55 +0000 (0:00:00.266) 0:00:02.824 **** 2025-09-18 10:33:01.496726 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496734 | orchestrator | 2025-09-18 10:33:01.496743 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496752 | orchestrator | Thursday 18 September 2025 10:32:55 +0000 (0:00:00.237) 0:00:03.062 **** 2025-09-18 10:33:01.496760 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496769 | orchestrator | 2025-09-18 10:33:01.496778 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496787 | orchestrator | Thursday 18 September 2025 10:32:55 +0000 (0:00:00.233) 0:00:03.295 **** 2025-09-18 10:33:01.496795 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.496804 | orchestrator | 2025-09-18 10:33:01.496813 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496821 | orchestrator | Thursday 18 September 2025 10:32:56 +0000 (0:00:00.225) 0:00:03.520 **** 2025-09-18 10:33:01.496831 | 
orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500) 2025-09-18 10:33:01.496841 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500) 2025-09-18 10:33:01.496849 | orchestrator | 2025-09-18 10:33:01.496858 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496867 | orchestrator | Thursday 18 September 2025 10:32:56 +0000 (0:00:00.462) 0:00:03.983 **** 2025-09-18 10:33:01.496906 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2) 2025-09-18 10:33:01.496916 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2) 2025-09-18 10:33:01.496925 | orchestrator | 2025-09-18 10:33:01.496934 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496943 | orchestrator | Thursday 18 September 2025 10:32:57 +0000 (0:00:00.424) 0:00:04.407 **** 2025-09-18 10:33:01.496951 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040) 2025-09-18 10:33:01.496960 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040) 2025-09-18 10:33:01.496969 | orchestrator | 2025-09-18 10:33:01.496978 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.496993 | orchestrator | Thursday 18 September 2025 10:32:57 +0000 (0:00:00.711) 0:00:05.119 **** 2025-09-18 10:33:01.497002 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2) 2025-09-18 10:33:01.497011 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2) 2025-09-18 10:33:01.497020 | orchestrator | 2025-09-18 10:33:01.497028 | 
orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-09-18 10:33:01.497037 | orchestrator | Thursday 18 September 2025 10:32:58 +0000 (0:00:01.019) 0:00:06.138 **** 2025-09-18 10:33:01.497046 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-09-18 10:33:01.497055 | orchestrator | 2025-09-18 10:33:01.497064 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497072 | orchestrator | Thursday 18 September 2025 10:32:59 +0000 (0:00:00.397) 0:00:06.535 **** 2025-09-18 10:33:01.497081 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0) 2025-09-18 10:33:01.497090 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1) 2025-09-18 10:33:01.497099 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2) 2025-09-18 10:33:01.497107 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3) 2025-09-18 10:33:01.497116 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4) 2025-09-18 10:33:01.497124 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5) 2025-09-18 10:33:01.497133 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6) 2025-09-18 10:33:01.497142 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7) 2025-09-18 10:33:01.497193 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda) 2025-09-18 10:33:01.497203 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-09-18 10:33:01.497212 | orchestrator | included: 
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-09-18 10:33:01.497221 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-09-18 10:33:01.497234 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-09-18 10:33:01.497243 | orchestrator | 2025-09-18 10:33:01.497252 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497261 | orchestrator | Thursday 18 September 2025 10:32:59 +0000 (0:00:00.532) 0:00:07.068 **** 2025-09-18 10:33:01.497270 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497279 | orchestrator | 2025-09-18 10:33:01.497288 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497296 | orchestrator | Thursday 18 September 2025 10:32:59 +0000 (0:00:00.230) 0:00:07.298 **** 2025-09-18 10:33:01.497305 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497314 | orchestrator | 2025-09-18 10:33:01.497323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497332 | orchestrator | Thursday 18 September 2025 10:33:00 +0000 (0:00:00.236) 0:00:07.535 **** 2025-09-18 10:33:01.497340 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497349 | orchestrator | 2025-09-18 10:33:01.497358 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497367 | orchestrator | Thursday 18 September 2025 10:33:00 +0000 (0:00:00.218) 0:00:07.754 **** 2025-09-18 10:33:01.497375 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497384 | orchestrator | 2025-09-18 10:33:01.497393 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497408 | orchestrator | Thursday 18 September 
2025 10:33:00 +0000 (0:00:00.226) 0:00:07.981 **** 2025-09-18 10:33:01.497417 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497425 | orchestrator | 2025-09-18 10:33:01.497434 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497443 | orchestrator | Thursday 18 September 2025 10:33:00 +0000 (0:00:00.275) 0:00:08.256 **** 2025-09-18 10:33:01.497452 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497460 | orchestrator | 2025-09-18 10:33:01.497469 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497478 | orchestrator | Thursday 18 September 2025 10:33:01 +0000 (0:00:00.211) 0:00:08.468 **** 2025-09-18 10:33:01.497487 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:01.497496 | orchestrator | 2025-09-18 10:33:01.497504 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:01.497513 | orchestrator | Thursday 18 September 2025 10:33:01 +0000 (0:00:00.201) 0:00:08.670 **** 2025-09-18 10:33:01.497528 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.604714 | orchestrator | 2025-09-18 10:33:10.604852 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:10.604869 | orchestrator | Thursday 18 September 2025 10:33:01 +0000 (0:00:00.214) 0:00:08.885 **** 2025-09-18 10:33:10.604880 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-09-18 10:33:10.604893 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-09-18 10:33:10.604904 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-09-18 10:33:10.604914 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-09-18 10:33:10.604924 | orchestrator | 2025-09-18 10:33:10.604934 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:10.604945 | 
orchestrator | Thursday 18 September 2025 10:33:02 +0000 (0:00:01.475) 0:00:10.361 **** 2025-09-18 10:33:10.604955 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.604964 | orchestrator | 2025-09-18 10:33:10.604974 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:10.604984 | orchestrator | Thursday 18 September 2025 10:33:03 +0000 (0:00:00.239) 0:00:10.601 **** 2025-09-18 10:33:10.604994 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.605003 | orchestrator | 2025-09-18 10:33:10.605013 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:10.605023 | orchestrator | Thursday 18 September 2025 10:33:03 +0000 (0:00:00.235) 0:00:10.836 **** 2025-09-18 10:33:10.605032 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.605042 | orchestrator | 2025-09-18 10:33:10.605052 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:10.605062 | orchestrator | Thursday 18 September 2025 10:33:03 +0000 (0:00:00.245) 0:00:11.081 **** 2025-09-18 10:33:10.605071 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.605081 | orchestrator | 2025-09-18 10:33:10.605091 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-18 10:33:10.605100 | orchestrator | Thursday 18 September 2025 10:33:03 +0000 (0:00:00.232) 0:00:11.314 **** 2025-09-18 10:33:10.605110 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.605119 | orchestrator | 2025-09-18 10:33:10.605129 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-18 10:33:10.605139 | orchestrator | Thursday 18 September 2025 10:33:04 +0000 (0:00:00.157) 0:00:11.472 **** 2025-09-18 10:33:10.605149 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 
'727b3796-a5b5-597b-af2a-93b7c6d70a12'}}) 2025-09-18 10:33:10.605195 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}}) 2025-09-18 10:33:10.605205 | orchestrator | 2025-09-18 10:33:10.605215 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-18 10:33:10.605226 | orchestrator | Thursday 18 September 2025 10:33:04 +0000 (0:00:00.182) 0:00:11.654 **** 2025-09-18 10:33:10.605239 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'}) 2025-09-18 10:33:10.605277 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'}) 2025-09-18 10:33:10.605288 | orchestrator | 2025-09-18 10:33:10.605299 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-18 10:33:10.605328 | orchestrator | Thursday 18 September 2025 10:33:06 +0000 (0:00:02.092) 0:00:13.747 **** 2025-09-18 10:33:10.605339 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})  2025-09-18 10:33:10.605352 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})  2025-09-18 10:33:10.605363 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:33:10.605374 | orchestrator | 2025-09-18 10:33:10.605385 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-18 10:33:10.605396 | orchestrator | Thursday 18 September 2025 10:33:06 +0000 (0:00:00.174) 0:00:13.922 **** 2025-09-18 10:33:10.605407 | orchestrator | changed: [testbed-node-3] => (item={'data': 
'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605418 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605429 | orchestrator |
2025-09-18 10:33:10.605440 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-18 10:33:10.605451 | orchestrator | Thursday 18 September 2025 10:33:08 +0000 (0:00:01.712) 0:00:15.634 ****
2025-09-18 10:33:10.605462 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605473 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605484 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605495 | orchestrator |
2025-09-18 10:33:10.605506 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-18 10:33:10.605516 | orchestrator | Thursday 18 September 2025 10:33:08 +0000 (0:00:00.144) 0:00:15.779 ****
2025-09-18 10:33:10.605526 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605535 | orchestrator |
2025-09-18 10:33:10.605545 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-18 10:33:10.605576 | orchestrator | Thursday 18 September 2025 10:33:08 +0000 (0:00:00.132) 0:00:15.912 ****
2025-09-18 10:33:10.605586 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605597 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605607 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605616 | orchestrator |
2025-09-18 10:33:10.605626 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-18 10:33:10.605636 | orchestrator | Thursday 18 September 2025 10:33:08 +0000 (0:00:00.409) 0:00:16.321 ****
2025-09-18 10:33:10.605646 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605656 | orchestrator |
2025-09-18 10:33:10.605665 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-18 10:33:10.605675 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.154) 0:00:16.476 ****
2025-09-18 10:33:10.605685 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605704 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605715 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605724 | orchestrator |
2025-09-18 10:33:10.605734 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-18 10:33:10.605744 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.192) 0:00:16.668 ****
2025-09-18 10:33:10.605753 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605763 | orchestrator |
2025-09-18 10:33:10.605773 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-18 10:33:10.605783 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.204) 0:00:16.873 ****
2025-09-18 10:33:10.605793 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605803 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605812 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605822 | orchestrator |
2025-09-18 10:33:10.605832 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-18 10:33:10.605842 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.174) 0:00:17.048 ****
2025-09-18 10:33:10.605852 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:10.605862 | orchestrator |
2025-09-18 10:33:10.605872 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-18 10:33:10.605881 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.139) 0:00:17.188 ****
2025-09-18 10:33:10.605891 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.605901 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605912 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605921 | orchestrator |
2025-09-18 10:33:10.605931 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-18 10:33:10.605949 | orchestrator | Thursday 18 September 2025 10:33:09 +0000 (0:00:00.153) 0:00:17.341 ****
2025-09-18 10:33:10.605959 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
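The "Count OSDs put on ceph_db_devices/ceph_wal_devices defined in lvm_volumes" tasks above tally how many `lvm_volumes` entries place their DB or WAL on each volume group; in this run every such task is skipped because the entries carry only `data`/`data_vg` keys. A minimal sketch of that bookkeeping, using hypothetical volume entries rather than values from this run:

```python
from collections import Counter

# Hypothetical lvm_volumes entries; in the run logged here the db_vg/wal_vg
# keys are absent, which is why the "Count OSDs ..." tasks are skipped.
lvm_volumes = [
    {"data": "osd-block-aaa", "data_vg": "ceph-aaa", "db_vg": "ceph-db-0"},
    {"data": "osd-block-bbb", "data_vg": "ceph-bbb", "db_vg": "ceph-db-0"},
    {"data": "osd-block-ccc", "data_vg": "ceph-ccc"},  # no separate DB device
]

def count_osds_per_vg(volumes, key):
    """Tally how many OSDs place their DB or WAL on each VG."""
    return Counter(v[key] for v in volumes if key in v)

num_osds_wanted_per_db_vg = count_osds_per_vg(lvm_volumes, "db_vg")
print(num_osds_wanted_per_db_vg)  # Counter({'ceph-db-0': 2})
```

A count like this is what the subsequent "Fail if number of OSDs exceeds num_osds for a DB VG" checks would compare against a configured limit.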
2025-09-18 10:33:10.605969 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.605979 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.605989 | orchestrator |
2025-09-18 10:33:10.605999 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-18 10:33:10.606009 | orchestrator | Thursday 18 September 2025 10:33:10 +0000 (0:00:00.177) 0:00:17.518 ****
2025-09-18 10:33:10.606093 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:10.606106 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:10.606116 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.606125 | orchestrator |
2025-09-18 10:33:10.606135 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-18 10:33:10.606145 | orchestrator | Thursday 18 September 2025 10:33:10 +0000 (0:00:00.182) 0:00:17.701 ****
2025-09-18 10:33:10.606173 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.606191 | orchestrator |
2025-09-18 10:33:10.606201 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-18 10:33:10.606210 | orchestrator | Thursday 18 September 2025 10:33:10 +0000 (0:00:00.159) 0:00:17.861 ****
2025-09-18 10:33:10.606220 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:10.606230 | orchestrator |
2025-09-18 10:33:10.606247 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-18 10:33:17.777965 | orchestrator | Thursday 18 September 2025 10:33:10 +0000 (0:00:00.132) 0:00:17.994 ****
2025-09-18 10:33:17.778076 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778085 | orchestrator |
2025-09-18 10:33:17.778090 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-18 10:33:17.778095 | orchestrator | Thursday 18 September 2025 10:33:10 +0000 (0:00:00.150) 0:00:18.144 ****
2025-09-18 10:33:17.778099 | orchestrator | ok: [testbed-node-3] => {
2025-09-18 10:33:17.778104 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-18 10:33:17.778108 | orchestrator | }
2025-09-18 10:33:17.778112 | orchestrator |
2025-09-18 10:33:17.778117 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-18 10:33:17.778121 | orchestrator | Thursday 18 September 2025 10:33:11 +0000 (0:00:00.726) 0:00:18.871 ****
2025-09-18 10:33:17.778125 | orchestrator | ok: [testbed-node-3] => {
2025-09-18 10:33:17.778129 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-18 10:33:17.778133 | orchestrator | }
2025-09-18 10:33:17.778137 | orchestrator |
2025-09-18 10:33:17.778141 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-18 10:33:17.778145 | orchestrator | Thursday 18 September 2025 10:33:11 +0000 (0:00:00.208) 0:00:19.079 ****
2025-09-18 10:33:17.778177 | orchestrator | ok: [testbed-node-3] => {
2025-09-18 10:33:17.778182 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-18 10:33:17.778187 | orchestrator | }
2025-09-18 10:33:17.778191 | orchestrator |
2025-09-18 10:33:17.778195 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-18 10:33:17.778199 | orchestrator | Thursday 18 September 2025 10:33:11 +0000 (0:00:00.178) 0:00:19.258 ****
2025-09-18 10:33:17.778203 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:17.778207 | orchestrator |
2025-09-18 10:33:17.778211 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-18 10:33:17.778216 | orchestrator | Thursday 18 September 2025 10:33:12 +0000 (0:00:00.744) 0:00:20.002 ****
2025-09-18 10:33:17.778219 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:17.778223 | orchestrator |
2025-09-18 10:33:17.778227 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-18 10:33:17.778231 | orchestrator | Thursday 18 September 2025 10:33:13 +0000 (0:00:00.527) 0:00:20.529 ****
2025-09-18 10:33:17.778235 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:17.778239 | orchestrator |
2025-09-18 10:33:17.778243 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-18 10:33:17.778247 | orchestrator | Thursday 18 September 2025 10:33:13 +0000 (0:00:00.166) 0:00:21.062 ****
2025-09-18 10:33:17.778251 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:17.778254 | orchestrator |
2025-09-18 10:33:17.778258 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-18 10:33:17.778262 | orchestrator | Thursday 18 September 2025 10:33:13 +0000 (0:00:00.113) 0:00:21.229 ****
2025-09-18 10:33:17.778266 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778270 | orchestrator |
2025-09-18 10:33:17.778274 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-18 10:33:17.778278 | orchestrator | Thursday 18 September 2025 10:33:13 +0000 (0:00:00.140) 0:00:21.342 ****
2025-09-18 10:33:17.778281 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778285 | orchestrator |
2025-09-18 10:33:17.778289 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-18 10:33:17.778293 | orchestrator | Thursday 18 September 2025 10:33:14 +0000 (0:00:00.140) 0:00:21.483 ****
2025-09-18 10:33:17.778313 | orchestrator | ok: [testbed-node-3] => {
2025-09-18 10:33:17.778317 | orchestrator |     "vgs_report": {
2025-09-18 10:33:17.778333 | orchestrator |         "vg": []
2025-09-18 10:33:17.778337 | orchestrator |     }
2025-09-18 10:33:17.778341 | orchestrator | }
2025-09-18 10:33:17.778345 | orchestrator |
2025-09-18 10:33:17.778349 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-18 10:33:17.778353 | orchestrator | Thursday 18 September 2025 10:33:14 +0000 (0:00:00.152) 0:00:21.636 ****
2025-09-18 10:33:17.778357 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778361 | orchestrator |
2025-09-18 10:33:17.778364 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-18 10:33:17.778368 | orchestrator | Thursday 18 September 2025 10:33:14 +0000 (0:00:00.140) 0:00:21.776 ****
2025-09-18 10:33:17.778372 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778376 | orchestrator |
2025-09-18 10:33:17.778380 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-18 10:33:17.778384 | orchestrator | Thursday 18 September 2025 10:33:14 +0000 (0:00:00.143) 0:00:21.919 ****
2025-09-18 10:33:17.778387 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778391 | orchestrator |
2025-09-18 10:33:17.778395 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-18 10:33:17.778399 | orchestrator | Thursday 18 September 2025 10:33:14 +0000 (0:00:00.370) 0:00:22.289 ****
2025-09-18 10:33:17.778403 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778407 | orchestrator |
2025-09-18 10:33:17.778410 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-18 10:33:17.778414 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.144) 0:00:22.434 ****
2025-09-18 10:33:17.778418 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778422 | orchestrator |
2025-09-18 10:33:17.778426 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-18 10:33:17.778430 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.153) 0:00:22.587 ****
2025-09-18 10:33:17.778434 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778437 | orchestrator |
2025-09-18 10:33:17.778441 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-18 10:33:17.778445 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.134) 0:00:22.722 ****
2025-09-18 10:33:17.778449 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778453 | orchestrator |
2025-09-18 10:33:17.778457 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-18 10:33:17.778460 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.146) 0:00:22.868 ****
2025-09-18 10:33:17.778464 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778468 | orchestrator |
2025-09-18 10:33:17.778472 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-18 10:33:17.778487 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.140) 0:00:23.009 ****
2025-09-18 10:33:17.778492 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778495 | orchestrator |
2025-09-18 10:33:17.778499 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-18 10:33:17.778503 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.148) 0:00:23.158 ****
2025-09-18 10:33:17.778507 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778511 | orchestrator |
2025-09-18 10:33:17.778514 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-09-18 10:33:17.778518 | orchestrator | Thursday 18 September 2025 10:33:15 +0000 (0:00:00.143) 0:00:23.301 ****
2025-09-18 10:33:17.778522 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778527 | orchestrator |
2025-09-18 10:33:17.778531 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-09-18 10:33:17.778535 | orchestrator | Thursday 18 September 2025 10:33:16 +0000 (0:00:00.164) 0:00:23.465 ****
2025-09-18 10:33:17.778539 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778544 | orchestrator |
2025-09-18 10:33:17.778556 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-09-18 10:33:17.778560 | orchestrator | Thursday 18 September 2025 10:33:16 +0000 (0:00:00.146) 0:00:23.611 ****
2025-09-18 10:33:17.778564 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778569 | orchestrator |
2025-09-18 10:33:17.778573 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-09-18 10:33:17.778577 | orchestrator | Thursday 18 September 2025 10:33:16 +0000 (0:00:00.149) 0:00:23.761 ****
2025-09-18 10:33:17.778581 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778586 | orchestrator |
2025-09-18 10:33:17.778590 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-09-18 10:33:17.778594 | orchestrator | Thursday 18 September 2025 10:33:16 +0000 (0:00:00.137) 0:00:23.899 ****
2025-09-18 10:33:17.778599 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778606 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:17.778610 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778614 | orchestrator |
2025-09-18 10:33:17.778618 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-09-18 10:33:17.778623 | orchestrator | Thursday 18 September 2025 10:33:16 +0000 (0:00:00.405) 0:00:24.304 ****
2025-09-18 10:33:17.778627 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778631 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:17.778636 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778640 | orchestrator |
2025-09-18 10:33:17.778644 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-09-18 10:33:17.778648 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.172) 0:00:24.476 ****
2025-09-18 10:33:17.778653 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778658 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:17.778662 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778666 | orchestrator |
2025-09-18 10:33:17.778670 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-09-18 10:33:17.778675 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.146) 0:00:24.623 ****
2025-09-18 10:33:17.778679 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778684 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:17.778688 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778692 | orchestrator |
2025-09-18 10:33:17.778696 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-09-18 10:33:17.778700 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.169) 0:00:24.793 ****
2025-09-18 10:33:17.778705 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778709 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:17.778713 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:17.778721 | orchestrator |
2025-09-18 10:33:17.778725 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-18 10:33:17.778730 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.201) 0:00:24.994 ****
2025-09-18 10:33:17.778734 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:17.778741 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.647763 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.647875 | orchestrator |
2025-09-18 10:33:23.647892 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-18 10:33:23.647906 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.173) 0:00:25.168 ****
2025-09-18 10:33:23.647936 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.647950 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.647962 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.647973 | orchestrator |
2025-09-18 10:33:23.647984 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-18 10:33:23.647996 | orchestrator | Thursday 18 September 2025 10:33:17 +0000 (0:00:00.170) 0:00:25.339 ****
2025-09-18 10:33:23.648007 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.648018 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.648030 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.648041 | orchestrator |
2025-09-18 10:33:23.648052 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-18 10:33:23.648064 | orchestrator | Thursday 18 September 2025 10:33:18 +0000 (0:00:00.165) 0:00:25.504 ****
2025-09-18 10:33:23.648075 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:23.648087 | orchestrator |
2025-09-18 10:33:23.648098 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-18 10:33:23.648109 | orchestrator | Thursday 18 September 2025 10:33:18 +0000 (0:00:00.542) 0:00:26.046 ****
2025-09-18 10:33:23.648120 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:23.648131 | orchestrator |
2025-09-18 10:33:23.648142 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-18 10:33:23.648183 | orchestrator | Thursday 18 September 2025 10:33:19 +0000 (0:00:00.521) 0:00:26.568 ****
2025-09-18 10:33:23.648194 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:33:23.648205 | orchestrator |
2025-09-18 10:33:23.648216 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-18 10:33:23.648228 | orchestrator | Thursday 18 September 2025 10:33:19 +0000 (0:00:00.161) 0:00:26.730 ****
2025-09-18 10:33:23.648241 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'vg_name': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.648256 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'vg_name': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.648269 | orchestrator |
2025-09-18 10:33:23.648290 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-18 10:33:23.648303 | orchestrator | Thursday 18 September 2025 10:33:19 +0000 (0:00:00.226) 0:00:26.956 ****
2025-09-18 10:33:23.648315 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.648348 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.648361 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.648373 | orchestrator |
2025-09-18 10:33:23.648386 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-18 10:33:23.648398 | orchestrator | Thursday 18 September 2025 10:33:19 +0000 (0:00:00.419) 0:00:27.376 ****
2025-09-18 10:33:23.648411 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.648423 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.648435 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.648447 | orchestrator |
2025-09-18 10:33:23.648459 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-18 10:33:23.648472 | orchestrator | Thursday 18 September 2025 10:33:20 +0000 (0:00:00.164) 0:00:27.540 ****
2025-09-18 10:33:23.648485 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:33:23.648498 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:33:23.648510 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:33:23.648522 | orchestrator |
2025-09-18 10:33:23.648535 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-18 10:33:23.648547 | orchestrator | Thursday 18 September 2025 10:33:20 +0000 (0:00:00.172) 0:00:27.712 ****
2025-09-18 10:33:23.648559 | orchestrator | ok: [testbed-node-3] => {
2025-09-18 10:33:23.648572 | orchestrator |     "lvm_report": {
2025-09-18 10:33:23.648585 | orchestrator |         "lv": [
2025-09-18 10:33:23.648598 | orchestrator |             {
2025-09-18 10:33:23.648628 | orchestrator |                 "lv_name": "osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12",
2025-09-18 10:33:23.648640 | orchestrator |                 "vg_name": "ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12"
2025-09-18 10:33:23.648651 | orchestrator |             },
2025-09-18 10:33:23.648662 | orchestrator |             {
2025-09-18 10:33:23.648673 | orchestrator |                 "lv_name": "osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f",
2025-09-18 10:33:23.648684 | orchestrator |                 "vg_name": "ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f"
2025-09-18 10:33:23.648695 | orchestrator |             }
2025-09-18 10:33:23.648706 | orchestrator |         ],
2025-09-18 10:33:23.648717 | orchestrator |         "pv": [
2025-09-18 10:33:23.648728 | orchestrator |             {
2025-09-18 10:33:23.648739 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-18 10:33:23.648750 | orchestrator |                 "vg_name": "ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12"
2025-09-18 10:33:23.648761 | orchestrator |             },
2025-09-18 10:33:23.648771 | orchestrator |             {
2025-09-18 10:33:23.648798 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-18 10:33:23.648810 | orchestrator |                 "vg_name": "ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f"
2025-09-18 10:33:23.648821 | orchestrator |             }
2025-09-18 10:33:23.648832 | orchestrator |         ]
2025-09-18 10:33:23.648843 | orchestrator |     }
2025-09-18 10:33:23.648854 | orchestrator | }
2025-09-18 10:33:23.648865 | orchestrator |
2025-09-18 10:33:23.648876 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-18 10:33:23.648887 | orchestrator |
2025-09-18 10:33:23.648898 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-18 10:33:23.648909 | orchestrator | Thursday 18 September 2025 10:33:20 +0000 (0:00:00.294) 0:00:28.006 ****
2025-09-18 10:33:23.648920 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)]
2025-09-18 10:33:23.648939 | orchestrator |
2025-09-18 10:33:23.648950 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-18 10:33:23.648961 | orchestrator | Thursday 18 September 2025 10:33:20 +0000 (0:00:00.288) 0:00:28.295 ****
2025-09-18 10:33:23.648972 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:33:23.648983 | orchestrator |
2025-09-18 10:33:23.648994 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649005 | orchestrator | Thursday 18 September 2025 10:33:21 +0000 (0:00:00.244) 0:00:28.540 ****
2025-09-18 10:33:23.649016 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0)
2025-09-18 10:33:23.649027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1)
2025-09-18 10:33:23.649038 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2)
2025-09-18 10:33:23.649048 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3)
2025-09-18 10:33:23.649059 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4)
2025-09-18 10:33:23.649070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5)
2025-09-18 10:33:23.649081 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6)
2025-09-18 10:33:23.649097 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7)
2025-09-18 10:33:23.649108 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda)
2025-09-18 10:33:23.649119 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb)
2025-09-18 10:33:23.649130 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc)
2025-09-18 10:33:23.649141 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd)
2025-09-18 10:33:23.649170 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0)
2025-09-18 10:33:23.649181 | orchestrator |
2025-09-18 10:33:23.649192 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649203 | orchestrator | Thursday 18 September 2025 10:33:21 +0000 (0:00:00.454) 0:00:28.994 ****
2025-09-18 10:33:23.649214 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649225 | orchestrator |
2025-09-18 10:33:23.649236 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649247 | orchestrator | Thursday 18 September 2025 10:33:21 +0000 (0:00:00.229) 0:00:29.224 ****
2025-09-18 10:33:23.649258 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649269 | orchestrator |
2025-09-18 10:33:23.649280 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649291 | orchestrator | Thursday 18 September 2025 10:33:22 +0000 (0:00:00.220) 0:00:29.444 ****
2025-09-18 10:33:23.649302 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649313 | orchestrator |
2025-09-18 10:33:23.649324 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649334 | orchestrator | Thursday 18 September 2025 10:33:22 +0000 (0:00:00.740) 0:00:30.184 ****
2025-09-18 10:33:23.649345 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649356 | orchestrator |
2025-09-18 10:33:23.649367 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649378 | orchestrator | Thursday 18 September 2025 10:33:22 +0000 (0:00:00.206) 0:00:30.390 ****
2025-09-18 10:33:23.649389 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649400 | orchestrator |
2025-09-18 10:33:23.649410 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649421 | orchestrator | Thursday 18 September 2025 10:33:23 +0000 (0:00:00.207) 0:00:30.598 ****
2025-09-18 10:33:23.649432 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649443 | orchestrator |
2025-09-18 10:33:23.649461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:23.649472 | orchestrator | Thursday 18 September 2025 10:33:23 +0000 (0:00:00.216) 0:00:30.815 ****
2025-09-18 10:33:23.649483 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:23.649494 | orchestrator |
2025-09-18 10:33:23.649512 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515474 | orchestrator | Thursday 18 September 2025 10:33:23 +0000 (0:00:00.222) 0:00:31.037 ****
2025-09-18 10:33:34.515592 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:34.515609 | orchestrator |
2025-09-18 10:33:34.515621 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515633 | orchestrator | Thursday 18 September 2025 10:33:23 +0000 (0:00:00.210) 0:00:31.248 ****
2025-09-18 10:33:34.515645 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177)
2025-09-18 10:33:34.515658 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177)
2025-09-18 10:33:34.515669 | orchestrator |
2025-09-18 10:33:34.515680 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515691 | orchestrator | Thursday 18 September 2025 10:33:24 +0000 (0:00:00.437) 0:00:31.685 ****
2025-09-18 10:33:34.515703 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70)
2025-09-18 10:33:34.515714 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70)
2025-09-18 10:33:34.515725 | orchestrator |
2025-09-18 10:33:34.515736 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515747 | orchestrator | Thursday 18 September 2025 10:33:24 +0000 (0:00:00.427) 0:00:32.112 ****
2025-09-18 10:33:34.515758 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940)
2025-09-18 10:33:34.515770 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940)
2025-09-18 10:33:34.515781 | orchestrator |
2025-09-18 10:33:34.515792 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515803 | orchestrator | Thursday 18 September 2025 10:33:25 +0000 (0:00:00.499) 0:00:32.612 ****
2025-09-18 10:33:34.515814 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4)
2025-09-18 10:33:34.515825 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4)
2025-09-18 10:33:34.515837 | orchestrator |
2025-09-18 10:33:34.515848 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:34.515859 | orchestrator | Thursday 18 September 2025 10:33:25 +0000 (0:00:00.464) 0:00:33.076 ****
2025-09-18 10:33:34.515870 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-18 10:33:34.515881 | orchestrator |
2025-09-18 10:33:34.515892 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:33:34.515917 | orchestrator | Thursday 18 September 2025 10:33:26 +0000 (0:00:00.392) 0:00:33.469 ****
2025-09-18 10:33:34.515928 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0)
2025-09-18 10:33:34.515941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1)
2025-09-18 10:33:34.515952 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2)
2025-09-18 10:33:34.515963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3)
2025-09-18 10:33:34.515974 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4)
2025-09-18 10:33:34.515986 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5)
2025-09-18 10:33:34.515998 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6)
2025-09-18 10:33:34.516034 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7)
2025-09-18 10:33:34.516047 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda)
2025-09-18 10:33:34.516059 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb)
2025-09-18 10:33:34.516071 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc)
2025-09-18 10:33:34.516083 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd)
2025-09-18 10:33:34.516095 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0)
2025-09-18 10:33:34.516107 | orchestrator |
2025-09-18 10:33:34.516161 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:33:34.516176 | orchestrator | Thursday 18 September 2025 10:33:26 +0000 (0:00:00.690) 0:00:34.159 ****
2025-09-18 10:33:34.516188 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:34.516200 | orchestrator |
2025-09-18 10:33:34.516212 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:33:34.516224 | orchestrator | Thursday 18 September 2025 10:33:26 +0000
(0:00:00.204) 0:00:34.364 **** 2025-09-18 10:33:34.516236 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516249 | orchestrator | 2025-09-18 10:33:34.516260 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516272 | orchestrator | Thursday 18 September 2025 10:33:27 +0000 (0:00:00.218) 0:00:34.582 **** 2025-09-18 10:33:34.516284 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516296 | orchestrator | 2025-09-18 10:33:34.516308 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516320 | orchestrator | Thursday 18 September 2025 10:33:27 +0000 (0:00:00.211) 0:00:34.794 **** 2025-09-18 10:33:34.516332 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516345 | orchestrator | 2025-09-18 10:33:34.516375 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516386 | orchestrator | Thursday 18 September 2025 10:33:27 +0000 (0:00:00.221) 0:00:35.015 **** 2025-09-18 10:33:34.516397 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516408 | orchestrator | 2025-09-18 10:33:34.516419 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516430 | orchestrator | Thursday 18 September 2025 10:33:27 +0000 (0:00:00.241) 0:00:35.257 **** 2025-09-18 10:33:34.516441 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516452 | orchestrator | 2025-09-18 10:33:34.516463 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516474 | orchestrator | Thursday 18 September 2025 10:33:28 +0000 (0:00:00.211) 0:00:35.468 **** 2025-09-18 10:33:34.516485 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516496 | orchestrator | 2025-09-18 10:33:34.516507 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516518 | orchestrator | Thursday 18 September 2025 10:33:28 +0000 (0:00:00.202) 0:00:35.671 **** 2025-09-18 10:33:34.516528 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516539 | orchestrator | 2025-09-18 10:33:34.516550 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516561 | orchestrator | Thursday 18 September 2025 10:33:28 +0000 (0:00:00.215) 0:00:35.886 **** 2025-09-18 10:33:34.516572 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-09-18 10:33:34.516583 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-09-18 10:33:34.516594 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-09-18 10:33:34.516605 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-09-18 10:33:34.516616 | orchestrator | 2025-09-18 10:33:34.516628 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516639 | orchestrator | Thursday 18 September 2025 10:33:29 +0000 (0:00:00.939) 0:00:36.826 **** 2025-09-18 10:33:34.516659 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516670 | orchestrator | 2025-09-18 10:33:34.516681 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516692 | orchestrator | Thursday 18 September 2025 10:33:29 +0000 (0:00:00.202) 0:00:37.029 **** 2025-09-18 10:33:34.516703 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516713 | orchestrator | 2025-09-18 10:33:34.516724 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516735 | orchestrator | Thursday 18 September 2025 10:33:29 +0000 (0:00:00.204) 0:00:37.233 **** 2025-09-18 10:33:34.516746 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516757 | orchestrator | 2025-09-18 
10:33:34.516768 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-09-18 10:33:34.516779 | orchestrator | Thursday 18 September 2025 10:33:30 +0000 (0:00:00.730) 0:00:37.964 **** 2025-09-18 10:33:34.516790 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516801 | orchestrator | 2025-09-18 10:33:34.516811 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-09-18 10:33:34.516822 | orchestrator | Thursday 18 September 2025 10:33:30 +0000 (0:00:00.211) 0:00:38.175 **** 2025-09-18 10:33:34.516839 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.516850 | orchestrator | 2025-09-18 10:33:34.516861 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-09-18 10:33:34.516872 | orchestrator | Thursday 18 September 2025 10:33:30 +0000 (0:00:00.137) 0:00:38.313 **** 2025-09-18 10:33:34.516883 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}}) 2025-09-18 10:33:34.516895 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '7a586834-03f6-5ee9-b58c-2d4644436c0e'}}) 2025-09-18 10:33:34.516906 | orchestrator | 2025-09-18 10:33:34.516917 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-09-18 10:33:34.516928 | orchestrator | Thursday 18 September 2025 10:33:31 +0000 (0:00:00.203) 0:00:38.517 **** 2025-09-18 10:33:34.516940 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}) 2025-09-18 10:33:34.516952 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'}) 2025-09-18 10:33:34.516963 | orchestrator | 2025-09-18 
10:33:34.516974 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-09-18 10:33:34.516985 | orchestrator | Thursday 18 September 2025 10:33:33 +0000 (0:00:01.890) 0:00:40.408 **** 2025-09-18 10:33:34.516996 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:34.517009 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:34.517019 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:34.517030 | orchestrator | 2025-09-18 10:33:34.517041 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-09-18 10:33:34.517052 | orchestrator | Thursday 18 September 2025 10:33:33 +0000 (0:00:00.173) 0:00:40.582 **** 2025-09-18 10:33:34.517063 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'}) 2025-09-18 10:33:34.517074 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'}) 2025-09-18 10:33:34.517085 | orchestrator | 2025-09-18 10:33:34.517102 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-09-18 10:33:40.308850 | orchestrator | Thursday 18 September 2025 10:33:34 +0000 (0:00:01.318) 0:00:41.900 **** 2025-09-18 10:33:40.308963 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.308981 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 
'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.308993 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309005 | orchestrator | 2025-09-18 10:33:40.309017 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-09-18 10:33:40.309028 | orchestrator | Thursday 18 September 2025 10:33:34 +0000 (0:00:00.161) 0:00:42.062 **** 2025-09-18 10:33:40.309039 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309050 | orchestrator | 2025-09-18 10:33:40.309062 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-09-18 10:33:40.309073 | orchestrator | Thursday 18 September 2025 10:33:34 +0000 (0:00:00.147) 0:00:42.209 **** 2025-09-18 10:33:40.309084 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309095 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309106 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309117 | orchestrator | 2025-09-18 10:33:40.309198 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-09-18 10:33:40.309212 | orchestrator | Thursday 18 September 2025 10:33:34 +0000 (0:00:00.172) 0:00:42.382 **** 2025-09-18 10:33:40.309224 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309235 | orchestrator | 2025-09-18 10:33:40.309246 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-09-18 10:33:40.309257 | orchestrator | Thursday 18 September 2025 10:33:35 +0000 (0:00:00.145) 0:00:42.527 **** 2025-09-18 10:33:40.309268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 
'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309279 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309290 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309301 | orchestrator | 2025-09-18 10:33:40.309312 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-09-18 10:33:40.309323 | orchestrator | Thursday 18 September 2025 10:33:35 +0000 (0:00:00.174) 0:00:42.702 **** 2025-09-18 10:33:40.309347 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309358 | orchestrator | 2025-09-18 10:33:40.309369 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-09-18 10:33:40.309380 | orchestrator | Thursday 18 September 2025 10:33:35 +0000 (0:00:00.368) 0:00:43.071 **** 2025-09-18 10:33:40.309391 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309402 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309413 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309423 | orchestrator | 2025-09-18 10:33:40.309434 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-09-18 10:33:40.309445 | orchestrator | Thursday 18 September 2025 10:33:35 +0000 (0:00:00.188) 0:00:43.260 **** 2025-09-18 10:33:40.309456 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:33:40.309468 | orchestrator | 2025-09-18 10:33:40.309479 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] 
**************** 2025-09-18 10:33:40.309490 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.139) 0:00:43.399 **** 2025-09-18 10:33:40.309512 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309523 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309535 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309546 | orchestrator | 2025-09-18 10:33:40.309557 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-09-18 10:33:40.309568 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.168) 0:00:43.568 **** 2025-09-18 10:33:40.309579 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309590 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309601 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309611 | orchestrator | 2025-09-18 10:33:40.309622 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-09-18 10:33:40.309633 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.165) 0:00:43.734 **** 2025-09-18 10:33:40.309661 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:40.309673 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 
'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:40.309684 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309695 | orchestrator | 2025-09-18 10:33:40.309706 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-09-18 10:33:40.309718 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.165) 0:00:43.899 **** 2025-09-18 10:33:40.309729 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309740 | orchestrator | 2025-09-18 10:33:40.309750 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-09-18 10:33:40.309761 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.142) 0:00:44.042 **** 2025-09-18 10:33:40.309773 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309784 | orchestrator | 2025-09-18 10:33:40.309795 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-09-18 10:33:40.309805 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.142) 0:00:44.184 **** 2025-09-18 10:33:40.309816 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.309827 | orchestrator | 2025-09-18 10:33:40.309838 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-09-18 10:33:40.309849 | orchestrator | Thursday 18 September 2025 10:33:36 +0000 (0:00:00.125) 0:00:44.309 **** 2025-09-18 10:33:40.309860 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 10:33:40.309872 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-09-18 10:33:40.309883 | orchestrator | } 2025-09-18 10:33:40.309895 | orchestrator | 2025-09-18 10:33:40.309905 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-09-18 10:33:40.309916 | orchestrator | Thursday 18 September 2025 10:33:37 +0000 (0:00:00.149) 0:00:44.459 **** 2025-09-18 10:33:40.309928 | orchestrator | 
ok: [testbed-node-4] => { 2025-09-18 10:33:40.309939 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-09-18 10:33:40.309950 | orchestrator | } 2025-09-18 10:33:40.309961 | orchestrator | 2025-09-18 10:33:40.309971 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-09-18 10:33:40.309983 | orchestrator | Thursday 18 September 2025 10:33:37 +0000 (0:00:00.150) 0:00:44.609 **** 2025-09-18 10:33:40.309994 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 10:33:40.310005 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-09-18 10:33:40.310078 | orchestrator | } 2025-09-18 10:33:40.310092 | orchestrator | 2025-09-18 10:33:40.310103 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-09-18 10:33:40.310115 | orchestrator | Thursday 18 September 2025 10:33:37 +0000 (0:00:00.153) 0:00:44.762 **** 2025-09-18 10:33:40.310126 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:33:40.310158 | orchestrator | 2025-09-18 10:33:40.310169 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-09-18 10:33:40.310180 | orchestrator | Thursday 18 September 2025 10:33:38 +0000 (0:00:00.745) 0:00:45.507 **** 2025-09-18 10:33:40.310191 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:33:40.310202 | orchestrator | 2025-09-18 10:33:40.310213 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-09-18 10:33:40.310225 | orchestrator | Thursday 18 September 2025 10:33:38 +0000 (0:00:00.551) 0:00:46.059 **** 2025-09-18 10:33:40.310236 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:33:40.310247 | orchestrator | 2025-09-18 10:33:40.310257 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-09-18 10:33:40.310268 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.501) 0:00:46.560 **** 2025-09-18 
10:33:40.310279 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:33:40.310290 | orchestrator | 2025-09-18 10:33:40.310301 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-09-18 10:33:40.310312 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.168) 0:00:46.729 **** 2025-09-18 10:33:40.310323 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310334 | orchestrator | 2025-09-18 10:33:40.310345 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-09-18 10:33:40.310356 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.120) 0:00:46.850 **** 2025-09-18 10:33:40.310367 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310378 | orchestrator | 2025-09-18 10:33:40.310388 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-09-18 10:33:40.310399 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.122) 0:00:46.972 **** 2025-09-18 10:33:40.310410 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 10:33:40.310422 | orchestrator |  "vgs_report": { 2025-09-18 10:33:40.310433 | orchestrator |  "vg": [] 2025-09-18 10:33:40.310444 | orchestrator |  } 2025-09-18 10:33:40.310455 | orchestrator | } 2025-09-18 10:33:40.310465 | orchestrator | 2025-09-18 10:33:40.310476 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-09-18 10:33:40.310487 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.144) 0:00:47.117 **** 2025-09-18 10:33:40.310498 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310509 | orchestrator | 2025-09-18 10:33:40.310520 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-09-18 10:33:40.310531 | orchestrator | Thursday 18 September 2025 10:33:39 +0000 (0:00:00.133) 0:00:47.251 **** 2025-09-18 
10:33:40.310542 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310553 | orchestrator | 2025-09-18 10:33:40.310581 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-09-18 10:33:40.310593 | orchestrator | Thursday 18 September 2025 10:33:40 +0000 (0:00:00.157) 0:00:47.408 **** 2025-09-18 10:33:40.310604 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310615 | orchestrator | 2025-09-18 10:33:40.310626 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-09-18 10:33:40.310637 | orchestrator | Thursday 18 September 2025 10:33:40 +0000 (0:00:00.142) 0:00:47.550 **** 2025-09-18 10:33:40.310648 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:40.310659 | orchestrator | 2025-09-18 10:33:40.310670 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-09-18 10:33:40.310690 | orchestrator | Thursday 18 September 2025 10:33:40 +0000 (0:00:00.144) 0:00:47.695 **** 2025-09-18 10:33:45.377734 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.377868 | orchestrator | 2025-09-18 10:33:45.377920 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-09-18 10:33:45.377935 | orchestrator | Thursday 18 September 2025 10:33:40 +0000 (0:00:00.164) 0:00:47.859 **** 2025-09-18 10:33:45.377946 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.377957 | orchestrator | 2025-09-18 10:33:45.377969 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-09-18 10:33:45.377981 | orchestrator | Thursday 18 September 2025 10:33:40 +0000 (0:00:00.411) 0:00:48.271 **** 2025-09-18 10:33:45.377992 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378003 | orchestrator | 2025-09-18 10:33:45.378014 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] 
**************** 2025-09-18 10:33:45.378088 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.144) 0:00:48.416 **** 2025-09-18 10:33:45.378099 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378110 | orchestrator | 2025-09-18 10:33:45.378157 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-09-18 10:33:45.378171 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.158) 0:00:48.575 **** 2025-09-18 10:33:45.378182 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378193 | orchestrator | 2025-09-18 10:33:45.378204 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-09-18 10:33:45.378215 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.145) 0:00:48.720 **** 2025-09-18 10:33:45.378226 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378237 | orchestrator | 2025-09-18 10:33:45.378248 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-18 10:33:45.378261 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.149) 0:00:48.869 **** 2025-09-18 10:33:45.378274 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378285 | orchestrator | 2025-09-18 10:33:45.378297 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-18 10:33:45.378310 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.145) 0:00:49.014 **** 2025-09-18 10:33:45.378322 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378335 | orchestrator | 2025-09-18 10:33:45.378346 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-18 10:33:45.378356 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.153) 0:00:49.168 **** 2025-09-18 10:33:45.378367 | orchestrator | skipping: [testbed-node-4] 
2025-09-18 10:33:45.378378 | orchestrator | 2025-09-18 10:33:45.378389 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-18 10:33:45.378400 | orchestrator | Thursday 18 September 2025 10:33:41 +0000 (0:00:00.125) 0:00:49.294 **** 2025-09-18 10:33:45.378411 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378422 | orchestrator | 2025-09-18 10:33:45.378433 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-18 10:33:45.378444 | orchestrator | Thursday 18 September 2025 10:33:42 +0000 (0:00:00.135) 0:00:49.430 **** 2025-09-18 10:33:45.378471 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:45.378485 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:45.378496 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378507 | orchestrator | 2025-09-18 10:33:45.378519 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-18 10:33:45.378530 | orchestrator | Thursday 18 September 2025 10:33:42 +0000 (0:00:00.173) 0:00:49.604 **** 2025-09-18 10:33:45.378541 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:45.378552 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:45.378576 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378587 | orchestrator | 2025-09-18 10:33:45.378598 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] 
************************************* 2025-09-18 10:33:45.378610 | orchestrator | Thursday 18 September 2025 10:33:42 +0000 (0:00:00.160) 0:00:49.764 **** 2025-09-18 10:33:45.378621 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:45.378633 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:45.378644 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378655 | orchestrator | 2025-09-18 10:33:45.378666 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-18 10:33:45.378677 | orchestrator | Thursday 18 September 2025 10:33:42 +0000 (0:00:00.158) 0:00:49.923 **** 2025-09-18 10:33:45.378688 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:45.378699 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})  2025-09-18 10:33:45.378710 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:33:45.378721 | orchestrator | 2025-09-18 10:33:45.378732 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-18 10:33:45.378762 | orchestrator | Thursday 18 September 2025 10:33:42 +0000 (0:00:00.368) 0:00:50.291 **** 2025-09-18 10:33:45.378774 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})  2025-09-18 10:33:45.378785 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 
'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.378796 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:45.378807 | orchestrator |
2025-09-18 10:33:45.378818 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-09-18 10:33:45.378829 | orchestrator | Thursday 18 September 2025  10:33:43 +0000 (0:00:00.174)       0:00:50.466 ****
2025-09-18 10:33:45.378840 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.378852 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.378863 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:45.378874 | orchestrator |
2025-09-18 10:33:45.378886 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-09-18 10:33:45.378897 | orchestrator | Thursday 18 September 2025  10:33:43 +0000 (0:00:00.164)       0:00:50.630 ****
2025-09-18 10:33:45.378908 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.378919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.378930 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:45.378941 | orchestrator |
2025-09-18 10:33:45.378952 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-09-18 10:33:45.378963 | orchestrator | Thursday 18 September 2025  10:33:43 +0000 (0:00:00.189)       0:00:50.820 ****
2025-09-18 10:33:45.378974 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.378992 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.379003 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:45.379014 | orchestrator |
2025-09-18 10:33:45.379031 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-09-18 10:33:45.379043 | orchestrator | Thursday 18 September 2025  10:33:43 +0000 (0:00:00.165)       0:00:50.986 ****
2025-09-18 10:33:45.379054 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:33:45.379065 | orchestrator |
2025-09-18 10:33:45.379076 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-09-18 10:33:45.379087 | orchestrator | Thursday 18 September 2025  10:33:44 +0000 (0:00:00.567)       0:00:51.553 ****
2025-09-18 10:33:45.379098 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:33:45.379109 | orchestrator |
2025-09-18 10:33:45.379120 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-09-18 10:33:45.379148 | orchestrator | Thursday 18 September 2025  10:33:44 +0000 (0:00:00.521)       0:00:52.074 ****
2025-09-18 10:33:45.379159 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:33:45.379170 | orchestrator |
2025-09-18 10:33:45.379181 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-09-18 10:33:45.379192 | orchestrator | Thursday 18 September 2025  10:33:44 +0000 (0:00:00.167)       0:00:52.242 ****
2025-09-18 10:33:45.379203 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'vg_name': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.379215 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'vg_name': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.379227 | orchestrator |
2025-09-18 10:33:45.379238 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-09-18 10:33:45.379248 | orchestrator | Thursday 18 September 2025  10:33:45 +0000 (0:00:00.174)       0:00:52.417 ****
2025-09-18 10:33:45.379260 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.379271 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.379282 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:45.379293 | orchestrator |
2025-09-18 10:33:45.379304 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-09-18 10:33:45.379315 | orchestrator | Thursday 18 September 2025  10:33:45 +0000 (0:00:00.176)       0:00:52.593 ****
2025-09-18 10:33:45.379326 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:45.379337 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:45.379356 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:52.362572 | orchestrator |
2025-09-18 10:33:52.362685 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-09-18 10:33:52.362703 | orchestrator | Thursday 18 September 2025  10:33:45 +0000 (0:00:00.171)       0:00:52.765 ****
2025-09-18 10:33:52.362716 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:33:52.362730 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:33:52.362741 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:33:52.362754 | orchestrator |
2025-09-18 10:33:52.362766 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-09-18 10:33:52.362777 | orchestrator | Thursday 18 September 2025  10:33:45 +0000 (0:00:00.167)       0:00:52.932 ****
2025-09-18 10:33:52.362815 | orchestrator | ok: [testbed-node-4] => {
2025-09-18 10:33:52.362827 | orchestrator |     "lvm_report": {
2025-09-18 10:33:52.362839 | orchestrator |         "lv": [
2025-09-18 10:33:52.362850 | orchestrator |             {
2025-09-18 10:33:52.362862 | orchestrator |                 "lv_name": "osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e",
2025-09-18 10:33:52.362874 | orchestrator |                 "vg_name": "ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e"
2025-09-18 10:33:52.362885 | orchestrator |             },
2025-09-18 10:33:52.362896 | orchestrator |             {
2025-09-18 10:33:52.362907 | orchestrator |                 "lv_name": "osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7",
2025-09-18 10:33:52.362918 | orchestrator |                 "vg_name": "ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7"
2025-09-18 10:33:52.362929 | orchestrator |             }
2025-09-18 10:33:52.362940 | orchestrator |         ],
2025-09-18 10:33:52.362951 | orchestrator |         "pv": [
2025-09-18 10:33:52.362962 | orchestrator |             {
2025-09-18 10:33:52.362973 | orchestrator |                 "pv_name": "/dev/sdb",
2025-09-18 10:33:52.362984 | orchestrator |                 "vg_name": "ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7"
2025-09-18 10:33:52.362995 | orchestrator |             },
2025-09-18 10:33:52.363006 | orchestrator |             {
2025-09-18 10:33:52.363017 | orchestrator |                 "pv_name": "/dev/sdc",
2025-09-18 10:33:52.363029 | orchestrator |                 "vg_name": "ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e"
2025-09-18 10:33:52.363040 | orchestrator |             }
2025-09-18 10:33:52.363051 | orchestrator |         ]
2025-09-18 10:33:52.363062 | orchestrator |     }
2025-09-18 10:33:52.363073 | orchestrator | }
2025-09-18 10:33:52.363084 | orchestrator |
2025-09-18 10:33:52.363096 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-09-18 10:33:52.363109 | orchestrator |
2025-09-18 10:33:52.363151 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-09-18 10:33:52.363164 | orchestrator | Thursday 18 September 2025  10:33:46 +0000 (0:00:00.593)       0:00:53.526 ****
2025-09-18 10:33:52.363176 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-09-18 10:33:52.363189 | orchestrator |
2025-09-18 10:33:52.363202 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-09-18 10:33:52.363214 | orchestrator | Thursday 18 September 2025  10:33:46 +0000 (0:00:00.274)       0:00:53.801 ****
2025-09-18 10:33:52.363227 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:33:52.363240 | orchestrator |
2025-09-18 10:33:52.363253 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363265 | orchestrator | Thursday 18 September 2025  10:33:46 +0000 (0:00:00.240)       0:00:54.042 ****
2025-09-18 10:33:52.363278 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-09-18 10:33:52.363290 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-09-18 10:33:52.363303 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-09-18 10:33:52.363314 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-09-18 10:33:52.363326 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-09-18 10:33:52.363337 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-09-18 10:33:52.363347 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-09-18 10:33:52.363358 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-09-18 10:33:52.363369 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-09-18 10:33:52.363380 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-09-18 10:33:52.363391 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-09-18 10:33:52.363418 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-09-18 10:33:52.363430 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-09-18 10:33:52.363440 | orchestrator |
2025-09-18 10:33:52.363451 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363462 | orchestrator | Thursday 18 September 2025  10:33:47 +0000 (0:00:00.480)       0:00:54.522 ****
2025-09-18 10:33:52.363473 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363488 | orchestrator |
2025-09-18 10:33:52.363500 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363511 | orchestrator | Thursday 18 September 2025  10:33:47 +0000 (0:00:00.229)       0:00:54.752 ****
2025-09-18 10:33:52.363522 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363532 | orchestrator |
2025-09-18 10:33:52.363543 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363575 | orchestrator | Thursday 18 September 2025  10:33:47 +0000 (0:00:00.251)       0:00:55.004 ****
2025-09-18 10:33:52.363587 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363598 | orchestrator |
2025-09-18 10:33:52.363609 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363619 | orchestrator | Thursday 18 September 2025  10:33:47 +0000 (0:00:00.235)       0:00:55.239 ****
2025-09-18 10:33:52.363630 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363641 | orchestrator |
2025-09-18 10:33:52.363652 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363663 | orchestrator | Thursday 18 September 2025  10:33:48 +0000 (0:00:00.225)       0:00:55.465 ****
2025-09-18 10:33:52.363674 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363685 | orchestrator |
2025-09-18 10:33:52.363695 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363706 | orchestrator | Thursday 18 September 2025  10:33:48 +0000 (0:00:00.224)       0:00:55.690 ****
2025-09-18 10:33:52.363717 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363728 | orchestrator |
2025-09-18 10:33:52.363739 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363750 | orchestrator | Thursday 18 September 2025  10:33:49 +0000 (0:00:00.733)       0:00:56.423 ****
2025-09-18 10:33:52.363760 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363771 | orchestrator |
2025-09-18 10:33:52.363782 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363793 | orchestrator | Thursday 18 September 2025  10:33:49 +0000 (0:00:00.225)       0:00:56.649 ****
2025-09-18 10:33:52.363804 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:33:52.363814 | orchestrator |
2025-09-18 10:33:52.363825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363836 | orchestrator | Thursday 18 September 2025  10:33:49 +0000 (0:00:00.273)       0:00:56.922 ****
2025-09-18 10:33:52.363847 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6)
2025-09-18 10:33:52.363906 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6)
2025-09-18 10:33:52.363919 | orchestrator |
2025-09-18 10:33:52.363930 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363941 | orchestrator | Thursday 18 September 2025  10:33:50 +0000 (0:00:00.556)       0:00:57.478 ****
2025-09-18 10:33:52.363952 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52)
2025-09-18 10:33:52.363963 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52)
2025-09-18 10:33:52.363974 | orchestrator |
2025-09-18 10:33:52.363984 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.363995 | orchestrator | Thursday 18 September 2025  10:33:50 +0000 (0:00:00.474)       0:00:57.953 ****
2025-09-18 10:33:52.364019 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836)
2025-09-18 10:33:52.364031 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836)
2025-09-18 10:33:52.364041 | orchestrator |
2025-09-18 10:33:52.364052 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.364063 | orchestrator | Thursday 18 September 2025  10:33:51 +0000 (0:00:00.477)       0:00:58.431 ****
2025-09-18 10:33:52.364074 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64)
2025-09-18 10:33:52.364085 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64)
2025-09-18 10:33:52.364095 | orchestrator |
2025-09-18 10:33:52.364106 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-09-18 10:33:52.364136 | orchestrator | Thursday 18 September 2025  10:33:51 +0000 (0:00:00.459)       0:00:58.890 ****
2025-09-18 10:33:52.364147 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-09-18 10:33:52.364158 | orchestrator |
2025-09-18 10:33:52.364169 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:33:52.364180 | orchestrator | Thursday 18 September 2025  10:33:51 +0000 (0:00:00.377)       0:00:59.268 ****
2025-09-18 10:33:52.364191 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-09-18 10:33:52.364201 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-09-18 10:33:52.364212 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-09-18 10:33:52.364223 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-09-18 10:33:52.364233 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-09-18 10:33:52.364244 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-09-18 10:33:52.364255 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-09-18 10:33:52.364266 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-09-18 10:33:52.364277 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-09-18 10:33:52.364287 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-09-18 10:33:52.364298 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-09-18 10:33:52.364317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-09-18 10:34:01.677999 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-09-18 10:34:01.678169 | orchestrator |
2025-09-18 10:34:01.678185 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678195 | orchestrator | Thursday 18 September 2025  10:33:52 +0000 (0:00:00.474)       0:00:59.743 ****
2025-09-18 10:34:01.678204 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678215 | orchestrator |
2025-09-18 10:34:01.678224 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678233 | orchestrator | Thursday 18 September 2025  10:33:52 +0000 (0:00:00.212)       0:00:59.956 ****
2025-09-18 10:34:01.678242 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678251 | orchestrator |
2025-09-18 10:34:01.678260 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678269 | orchestrator | Thursday 18 September 2025  10:33:52 +0000 (0:00:00.200)       0:01:00.157 ****
2025-09-18 10:34:01.678278 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678286 | orchestrator |
2025-09-18 10:34:01.678295 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678327 | orchestrator | Thursday 18 September 2025  10:33:53 +0000 (0:00:00.699)       0:01:00.856 ****
2025-09-18 10:34:01.678337 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678346 | orchestrator |
2025-09-18 10:34:01.678354 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678363 | orchestrator | Thursday 18 September 2025  10:33:53 +0000 (0:00:00.197)       0:01:01.053 ****
2025-09-18 10:34:01.678372 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678381 | orchestrator |
2025-09-18 10:34:01.678389 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678398 | orchestrator | Thursday 18 September 2025  10:33:53 +0000 (0:00:00.207)       0:01:01.260 ****
2025-09-18 10:34:01.678407 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678416 | orchestrator |
2025-09-18 10:34:01.678424 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678433 | orchestrator | Thursday 18 September 2025  10:33:54 +0000 (0:00:00.218)       0:01:01.479 ****
2025-09-18 10:34:01.678442 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678463 | orchestrator |
2025-09-18 10:34:01.678472 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678481 | orchestrator | Thursday 18 September 2025  10:33:54 +0000 (0:00:00.206)       0:01:01.686 ****
2025-09-18 10:34:01.678490 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678498 | orchestrator |
2025-09-18 10:34:01.678516 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678525 | orchestrator | Thursday 18 September 2025  10:33:54 +0000 (0:00:00.243)       0:01:01.929 ****
2025-09-18 10:34:01.678534 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-09-18 10:34:01.678545 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-09-18 10:34:01.678568 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-09-18 10:34:01.678578 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-09-18 10:34:01.678588 | orchestrator |
2025-09-18 10:34:01.678598 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678608 | orchestrator | Thursday 18 September 2025  10:33:55 +0000 (0:00:00.686)       0:01:02.616 ****
2025-09-18 10:34:01.678618 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678628 | orchestrator |
2025-09-18 10:34:01.678638 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678647 | orchestrator | Thursday 18 September 2025  10:33:55 +0000 (0:00:00.198)       0:01:02.815 ****
2025-09-18 10:34:01.678657 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678667 | orchestrator |
2025-09-18 10:34:01.678677 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678687 | orchestrator | Thursday 18 September 2025  10:33:55 +0000 (0:00:00.213)       0:01:03.028 ****
2025-09-18 10:34:01.678695 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678704 | orchestrator |
2025-09-18 10:34:01.678713 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-09-18 10:34:01.678721 | orchestrator | Thursday 18 September 2025  10:33:55 +0000 (0:00:00.211)       0:01:03.240 ****
2025-09-18 10:34:01.678730 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678739 | orchestrator |
2025-09-18 10:34:01.678747 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-09-18 10:34:01.678756 | orchestrator | Thursday 18 September 2025  10:33:56 +0000 (0:00:00.215)       0:01:03.455 ****
2025-09-18 10:34:01.678765 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678773 | orchestrator |
2025-09-18 10:34:01.678782 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-09-18 10:34:01.678791 | orchestrator | Thursday 18 September 2025  10:33:56 +0000 (0:00:00.350)       0:01:03.806 ****
2025-09-18 10:34:01.678799 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '47a403a8-a225-5ee6-9198-c4852ee3470e'}})
2025-09-18 10:34:01.678809 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': 'a661e8c0-0419-5fc2-afc1-c6737c299168'}})
2025-09-18 10:34:01.678827 | orchestrator |
2025-09-18 10:34:01.678836 | orchestrator | TASK [Create block VGs] ********************************************************
2025-09-18 10:34:01.678845 | orchestrator | Thursday 18 September 2025  10:33:56 +0000 (0:00:00.201)       0:01:04.007 ****
2025-09-18 10:34:01.678855 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.678865 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.678874 | orchestrator |
2025-09-18 10:34:01.678883 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-09-18 10:34:01.678908 | orchestrator | Thursday 18 September 2025  10:33:58 +0000 (0:00:01.894)       0:01:05.902 ****
2025-09-18 10:34:01.678917 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.678927 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.678936 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.678945 | orchestrator |
2025-09-18 10:34:01.678953 | orchestrator | TASK [Create block LVs] ********************************************************
2025-09-18 10:34:01.678962 | orchestrator | Thursday 18 September 2025  10:33:58 +0000 (0:00:00.167)       0:01:06.070 ****
2025-09-18 10:34:01.678971 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.678980 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.678989 | orchestrator |
2025-09-18 10:34:01.678997 | orchestrator | TASK [Print 'Create block LVs'] ************************************************
2025-09-18 10:34:01.679006 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:01.364)       0:01:07.434 ****
2025-09-18 10:34:01.679015 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.679024 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.679032 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679041 | orchestrator |
2025-09-18 10:34:01.679049 | orchestrator | TASK [Create DB VGs] ***********************************************************
2025-09-18 10:34:01.679058 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.164)       0:01:07.599 ****
2025-09-18 10:34:01.679067 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679075 | orchestrator |
2025-09-18 10:34:01.679084 | orchestrator | TASK [Print 'Create DB VGs'] ***************************************************
2025-09-18 10:34:01.679093 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.123)       0:01:07.722 ****
2025-09-18 10:34:01.679101 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.679167 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.679178 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679187 | orchestrator |
2025-09-18 10:34:01.679195 | orchestrator | TASK [Create WAL VGs] **********************************************************
2025-09-18 10:34:01.679204 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.165)       0:01:07.887 ****
2025-09-18 10:34:01.679213 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679228 | orchestrator |
2025-09-18 10:34:01.679237 | orchestrator | TASK [Print 'Create WAL VGs'] **************************************************
2025-09-18 10:34:01.679246 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.142)       0:01:08.030 ****
2025-09-18 10:34:01.679254 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.679263 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.679272 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679281 | orchestrator |
2025-09-18 10:34:01.679290 | orchestrator | TASK [Create DB+WAL VGs] *******************************************************
2025-09-18 10:34:01.679298 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.167)       0:01:08.198 ****
2025-09-18 10:34:01.679307 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679316 | orchestrator |
2025-09-18 10:34:01.679325 | orchestrator | TASK [Print 'Create DB+WAL VGs'] ***********************************************
2025-09-18 10:34:01.679333 | orchestrator | Thursday 18 September 2025  10:34:00 +0000 (0:00:00.153)       0:01:08.351 ****
2025-09-18 10:34:01.679342 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:01.679351 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:01.679360 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:01.679368 | orchestrator |
2025-09-18 10:34:01.679377 | orchestrator | TASK [Prepare variables for OSD count check] ***********************************
2025-09-18 10:34:01.679386 | orchestrator | Thursday 18 September 2025  10:34:01 +0000 (0:00:00.404)       0:01:08.497 ****
2025-09-18 10:34:01.679395 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:34:01.679403 | orchestrator |
2025-09-18 10:34:01.679412 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-09-18 10:34:01.679421 | orchestrator | Thursday 18 September 2025  10:34:01 +0000 (0:00:00.404)       0:01:08.901 ****
2025-09-18 10:34:01.679436 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:07.988698 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:07.988818 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.988845 | orchestrator |
2025-09-18 10:34:07.988866 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-09-18 10:34:07.988879 | orchestrator | Thursday 18 September 2025  10:34:01 +0000 (0:00:00.161)       0:01:09.063 ****
2025-09-18 10:34:07.988889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:07.988900 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:07.988910 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.988921 | orchestrator |
2025-09-18 10:34:07.988932 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-09-18 10:34:07.988942 | orchestrator | Thursday 18 September 2025  10:34:01 +0000 (0:00:00.149)       0:01:09.212 ****
2025-09-18 10:34:07.988952 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:34:07.988962 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:34:07.988972 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989004 | orchestrator |
2025-09-18 10:34:07.989015 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-09-18 10:34:07.989025 | orchestrator | Thursday 18 September 2025  10:34:01 +0000 (0:00:00.159)       0:01:09.372 ****
2025-09-18 10:34:07.989035 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989044 | orchestrator |
2025-09-18 10:34:07.989054 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-09-18 10:34:07.989064 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.158)       0:01:09.530 ****
2025-09-18 10:34:07.989073 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989083 | orchestrator |
2025-09-18 10:34:07.989093 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-09-18 10:34:07.989130 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.145)       0:01:09.676 ****
2025-09-18 10:34:07.989141 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989151 | orchestrator |
2025-09-18 10:34:07.989161 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-09-18 10:34:07.989171 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.146)       0:01:09.823 ****
2025-09-18 10:34:07.989181 | orchestrator | ok: [testbed-node-5] => {
2025-09-18 10:34:07.989192 | orchestrator |     "_num_osds_wanted_per_db_vg": {}
2025-09-18 10:34:07.989202 | orchestrator | }
2025-09-18 10:34:07.989212 | orchestrator |
2025-09-18 10:34:07.989222 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-09-18 10:34:07.989234 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.152)       0:01:09.975 ****
2025-09-18 10:34:07.989244 | orchestrator | ok: [testbed-node-5] => {
2025-09-18 10:34:07.989256 | orchestrator |     "_num_osds_wanted_per_wal_vg": {}
2025-09-18 10:34:07.989267 | orchestrator | }
2025-09-18 10:34:07.989308 | orchestrator |
2025-09-18 10:34:07.989319 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-09-18 10:34:07.989331 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.142)       0:01:10.118 ****
2025-09-18 10:34:07.989342 | orchestrator | ok: [testbed-node-5] => {
2025-09-18 10:34:07.989354 | orchestrator |     "_num_osds_wanted_per_db_wal_vg": {}
2025-09-18 10:34:07.989365 | orchestrator | }
2025-09-18 10:34:07.989376 | orchestrator |
2025-09-18 10:34:07.989387 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-09-18 10:34:07.989399 | orchestrator | Thursday 18 September 2025  10:34:02 +0000 (0:00:00.151)       0:01:10.270 ****
2025-09-18 10:34:07.989411 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:34:07.989422 | orchestrator |
2025-09-18 10:34:07.989433 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-09-18 10:34:07.989444 | orchestrator | Thursday 18 September 2025  10:34:03 +0000 (0:00:00.528)       0:01:10.798 ****
2025-09-18 10:34:07.989455 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:34:07.989466 | orchestrator |
2025-09-18 10:34:07.989477 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-09-18 10:34:07.989488 | orchestrator | Thursday 18 September 2025  10:34:03 +0000 (0:00:00.535)       0:01:11.333 ****
2025-09-18 10:34:07.989499 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:34:07.989510 | orchestrator |
2025-09-18 10:34:07.989520 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-09-18 10:34:07.989532 | orchestrator | Thursday 18 September 2025  10:34:04 +0000 (0:00:00.721)       0:01:12.057 ****
2025-09-18 10:34:07.989543 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:34:07.989554 | orchestrator |
2025-09-18 10:34:07.989565 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-09-18 10:34:07.989575 | orchestrator | Thursday 18 September 2025  10:34:04 +0000 (0:00:00.165)       0:01:12.222 ****
2025-09-18 10:34:07.989585 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989595 | orchestrator |
2025-09-18 10:34:07.989604 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-09-18 10:34:07.989614 | orchestrator | Thursday 18 September 2025  10:34:04 +0000 (0:00:00.132)       0:01:12.354 ****
2025-09-18 10:34:07.989633 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989642 | orchestrator |
2025-09-18 10:34:07.989652 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-09-18 10:34:07.989662 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.140)       0:01:12.494 ****
2025-09-18 10:34:07.989672 | orchestrator | ok: [testbed-node-5] => {
2025-09-18 10:34:07.989682 | orchestrator |     "vgs_report": {
2025-09-18 10:34:07.989692 | orchestrator |         "vg": []
2025-09-18 10:34:07.989721 | orchestrator |     }
2025-09-18 10:34:07.989732 | orchestrator | }
2025-09-18 10:34:07.989742 | orchestrator |
2025-09-18 10:34:07.989753 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-09-18 10:34:07.989772 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.156)       0:01:12.651 ****
2025-09-18 10:34:07.989789 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989805 | orchestrator |
2025-09-18 10:34:07.989822 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-09-18 10:34:07.989839 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.144)       0:01:12.795 ****
2025-09-18 10:34:07.989855 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989870 | orchestrator |
2025-09-18 10:34:07.989880 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-09-18 10:34:07.989889 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.140)       0:01:12.936 ****
2025-09-18 10:34:07.989899 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989909 | orchestrator |
2025-09-18 10:34:07.989919 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-09-18 10:34:07.989929 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.142)       0:01:13.078 ****
2025-09-18 10:34:07.989939 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.989949 | orchestrator |
2025-09-18 10:34:07.989959 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-09-18 10:34:07.989986 | orchestrator | Thursday 18 September 2025  10:34:05 +0000 (0:00:00.176)       0:01:13.254 ****
2025-09-18 10:34:07.989996 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.990006 | orchestrator |
2025-09-18 10:34:07.990067 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-09-18 10:34:07.990079 | orchestrator | Thursday 18 September 2025  10:34:06 +0000 (0:00:00.149)       0:01:13.404 ****
2025-09-18 10:34:07.990089 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.990099 | orchestrator |
2025-09-18 10:34:07.990135 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-09-18 10:34:07.990145 | orchestrator | Thursday 18 September 2025  10:34:06 +0000 (0:00:00.147)       0:01:13.552 ****
2025-09-18 10:34:07.990155 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.990164 | orchestrator |
2025-09-18 10:34:07.990174 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-09-18 10:34:07.990184 | orchestrator | Thursday 18 September 2025  10:34:06 +0000 (0:00:00.136)       0:01:13.688 ****
2025-09-18 10:34:07.990193 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.990203 | orchestrator |
2025-09-18 10:34:07.990213 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-09-18 10:34:07.990222 | orchestrator | Thursday 18 September 2025  10:34:06 +0000 (0:00:00.141)       0:01:13.830 ****
2025-09-18 10:34:07.990232 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:34:07.990242 | orchestrator |
2025-09-18 10:34:07.990251 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-09-18 10:34:07.990266 | orchestrator | Thursday 18 September 2025  10:34:06 +0000 (0:00:00.343)       0:01:14.173 ****
2025-09-18 10:34:07.990277 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990286 | orchestrator | 2025-09-18 10:34:07.990296 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-09-18 10:34:07.990306 | orchestrator | Thursday 18 September 2025 10:34:06 +0000 (0:00:00.149) 0:01:14.322 **** 2025-09-18 10:34:07.990315 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990334 | orchestrator | 2025-09-18 10:34:07.990343 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-09-18 10:34:07.990353 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.155) 0:01:14.478 **** 2025-09-18 10:34:07.990363 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990373 | orchestrator | 2025-09-18 10:34:07.990383 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-09-18 10:34:07.990392 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.140) 0:01:14.619 **** 2025-09-18 10:34:07.990402 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990412 | orchestrator | 2025-09-18 10:34:07.990422 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-09-18 10:34:07.990432 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.155) 0:01:14.774 **** 2025-09-18 10:34:07.990441 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990451 | orchestrator | 2025-09-18 10:34:07.990461 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-09-18 10:34:07.990471 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.135) 0:01:14.910 **** 2025-09-18 10:34:07.990481 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 
10:34:07.990491 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:07.990501 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990511 | orchestrator | 2025-09-18 10:34:07.990520 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-09-18 10:34:07.990530 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.155) 0:01:15.065 **** 2025-09-18 10:34:07.990540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:07.990550 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:07.990560 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:07.990569 | orchestrator | 2025-09-18 10:34:07.990579 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-09-18 10:34:07.990589 | orchestrator | Thursday 18 September 2025 10:34:07 +0000 (0:00:00.152) 0:01:15.218 **** 2025-09-18 10:34:07.990608 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.102740 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.102823 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.102838 | orchestrator | 2025-09-18 10:34:11.102849 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-09-18 10:34:11.102860 | orchestrator | Thursday 18 September 2025 
10:34:07 +0000 (0:00:00.159) 0:01:15.378 **** 2025-09-18 10:34:11.102870 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.102880 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.102890 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.102900 | orchestrator | 2025-09-18 10:34:11.102910 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-09-18 10:34:11.102919 | orchestrator | Thursday 18 September 2025 10:34:08 +0000 (0:00:00.179) 0:01:15.558 **** 2025-09-18 10:34:11.102929 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.102959 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.102969 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.102979 | orchestrator | 2025-09-18 10:34:11.102989 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-09-18 10:34:11.102998 | orchestrator | Thursday 18 September 2025 10:34:08 +0000 (0:00:00.158) 0:01:15.717 **** 2025-09-18 10:34:11.103008 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103018 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103027 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 10:34:11.103037 | orchestrator | 2025-09-18 10:34:11.103058 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-09-18 10:34:11.103068 | orchestrator | Thursday 18 September 2025 10:34:08 +0000 (0:00:00.150) 0:01:15.867 **** 2025-09-18 10:34:11.103078 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103088 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103118 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.103130 | orchestrator | 2025-09-18 10:34:11.103140 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-09-18 10:34:11.103150 | orchestrator | Thursday 18 September 2025 10:34:08 +0000 (0:00:00.386) 0:01:16.253 **** 2025-09-18 10:34:11.103160 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103170 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103180 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.103190 | orchestrator | 2025-09-18 10:34:11.103200 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-09-18 10:34:11.103209 | orchestrator | Thursday 18 September 2025 10:34:09 +0000 (0:00:00.155) 0:01:16.409 **** 2025-09-18 10:34:11.103219 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:34:11.103230 | orchestrator | 2025-09-18 10:34:11.103239 | orchestrator | TASK [Get list of Ceph PVs with 
associated VGs] ******************************** 2025-09-18 10:34:11.103249 | orchestrator | Thursday 18 September 2025 10:34:09 +0000 (0:00:00.548) 0:01:16.957 **** 2025-09-18 10:34:11.103259 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:34:11.103269 | orchestrator | 2025-09-18 10:34:11.103280 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-09-18 10:34:11.103290 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.521) 0:01:17.478 **** 2025-09-18 10:34:11.103301 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:34:11.103312 | orchestrator | 2025-09-18 10:34:11.103323 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-09-18 10:34:11.103334 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.158) 0:01:17.637 **** 2025-09-18 10:34:11.103345 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'vg_name': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'}) 2025-09-18 10:34:11.103362 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'vg_name': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'}) 2025-09-18 10:34:11.103379 | orchestrator | 2025-09-18 10:34:11.103395 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-09-18 10:34:11.103424 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.171) 0:01:17.808 **** 2025-09-18 10:34:11.103462 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103480 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103491 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 10:34:11.103501 | orchestrator | 2025-09-18 10:34:11.103511 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-09-18 10:34:11.103521 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.164) 0:01:17.973 **** 2025-09-18 10:34:11.103530 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103540 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103550 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.103560 | orchestrator | 2025-09-18 10:34:11.103570 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-09-18 10:34:11.103580 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.171) 0:01:18.145 **** 2025-09-18 10:34:11.103590 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})  2025-09-18 10:34:11.103599 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})  2025-09-18 10:34:11.103609 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:34:11.103619 | orchestrator | 2025-09-18 10:34:11.103629 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-09-18 10:34:11.103638 | orchestrator | Thursday 18 September 2025 10:34:10 +0000 (0:00:00.157) 0:01:18.303 **** 2025-09-18 10:34:11.103648 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 10:34:11.103658 | orchestrator |  "lvm_report": { 2025-09-18 10:34:11.103668 | orchestrator |  "lv": [ 2025-09-18 
10:34:11.103677 | orchestrator |  { 2025-09-18 10:34:11.103687 | orchestrator |  "lv_name": "osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e", 2025-09-18 10:34:11.103702 | orchestrator |  "vg_name": "ceph-47a403a8-a225-5ee6-9198-c4852ee3470e" 2025-09-18 10:34:11.103712 | orchestrator |  }, 2025-09-18 10:34:11.103722 | orchestrator |  { 2025-09-18 10:34:11.103732 | orchestrator |  "lv_name": "osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168", 2025-09-18 10:34:11.103742 | orchestrator |  "vg_name": "ceph-a661e8c0-0419-5fc2-afc1-c6737c299168" 2025-09-18 10:34:11.103751 | orchestrator |  } 2025-09-18 10:34:11.103761 | orchestrator |  ], 2025-09-18 10:34:11.103770 | orchestrator |  "pv": [ 2025-09-18 10:34:11.103780 | orchestrator |  { 2025-09-18 10:34:11.103790 | orchestrator |  "pv_name": "/dev/sdb", 2025-09-18 10:34:11.103800 | orchestrator |  "vg_name": "ceph-47a403a8-a225-5ee6-9198-c4852ee3470e" 2025-09-18 10:34:11.103810 | orchestrator |  }, 2025-09-18 10:34:11.103819 | orchestrator |  { 2025-09-18 10:34:11.103829 | orchestrator |  "pv_name": "/dev/sdc", 2025-09-18 10:34:11.103839 | orchestrator |  "vg_name": "ceph-a661e8c0-0419-5fc2-afc1-c6737c299168" 2025-09-18 10:34:11.103849 | orchestrator |  } 2025-09-18 10:34:11.103858 | orchestrator |  ] 2025-09-18 10:34:11.103868 | orchestrator |  } 2025-09-18 10:34:11.103877 | orchestrator | } 2025-09-18 10:34:11.103887 | orchestrator | 2025-09-18 10:34:11.103897 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:34:11.103913 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 10:34:11.103923 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 10:34:11.103933 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-09-18 10:34:11.103943 | orchestrator | 2025-09-18 10:34:11.103952 | 
orchestrator | 2025-09-18 10:34:11.103962 | orchestrator | 2025-09-18 10:34:11.103972 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:34:11.103981 | orchestrator | Thursday 18 September 2025 10:34:11 +0000 (0:00:00.158) 0:01:18.461 **** 2025-09-18 10:34:11.103991 | orchestrator | =============================================================================== 2025-09-18 10:34:11.104001 | orchestrator | Create block VGs -------------------------------------------------------- 5.88s 2025-09-18 10:34:11.104010 | orchestrator | Create block LVs -------------------------------------------------------- 4.40s 2025-09-18 10:34:11.104020 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.02s 2025-09-18 10:34:11.104029 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.76s 2025-09-18 10:34:11.104039 | orchestrator | Add known partitions to the list of available block devices ------------- 1.70s 2025-09-18 10:34:11.104049 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.66s 2025-09-18 10:34:11.104058 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.61s 2025-09-18 10:34:11.104068 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.56s 2025-09-18 10:34:11.104084 | orchestrator | Add known partitions to the list of available block devices ------------- 1.48s 2025-09-18 10:34:11.503667 | orchestrator | Add known links to the list of available block devices ------------------ 1.40s 2025-09-18 10:34:11.503764 | orchestrator | Print LVM report data --------------------------------------------------- 1.05s 2025-09-18 10:34:11.503783 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 1.03s 2025-09-18 10:34:11.503797 | orchestrator | Add known links to the list of available 
block devices ------------------ 1.02s 2025-09-18 10:34:11.503810 | orchestrator | Add known partitions to the list of available block devices ------------- 0.94s 2025-09-18 10:34:11.503823 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.85s 2025-09-18 10:34:11.503837 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.76s 2025-09-18 10:34:11.503851 | orchestrator | Print 'Create DB VGs' --------------------------------------------------- 0.75s 2025-09-18 10:34:11.503864 | orchestrator | Create DB LVs for ceph_db_wal_devices ----------------------------------- 0.75s 2025-09-18 10:34:11.503878 | orchestrator | Add known links to the list of available block devices ------------------ 0.74s 2025-09-18 10:34:11.503892 | orchestrator | Get initial list of available block devices ----------------------------- 0.74s 2025-09-18 10:34:24.152622 | orchestrator | 2025-09-18 10:34:24 | INFO  | Task ccc3bf40-99fe-4686-9d07-9f99d15a3bb3 (facts) was prepared for execution. 2025-09-18 10:34:24.152713 | orchestrator | 2025-09-18 10:34:24 | INFO  | It takes a moment until task ccc3bf40-99fe-4686-9d07-9f99d15a3bb3 (facts) has been started and output is visible here. 
PLAY [Apply role facts] ********************************************************

TASK [osism.commons.facts : Create custom facts directory] *********************
Thursday 18 September 2025 10:34:28 +0000 (0:00:00.355) 0:00:00.355 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-manager]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [osism.commons.facts : Copy fact files] ***********************************
Thursday 18 September 2025 10:34:30 +0000 (0:00:01.188) 0:00:01.543 ****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY [Gather facts for all hosts] **********************************************

TASK [Gathers facts about hosts] ***********************************************
Thursday 18 September 2025 10:34:31 +0000 (0:00:01.278) 0:00:02.822 ****
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-manager]
ok: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-3]
ok: [testbed-node-5]

PLAY [Gather facts for all hosts if using --limit] *****************************

TASK [Gather facts for all hosts] **********************************************
Thursday 18 September 2025 10:34:36 +0000 (0:00:04.957) 0:00:07.779 ****
skipping: [testbed-manager]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-manager            : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-0             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-1             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-2             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-3             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-4             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0
testbed-node-5             : ok=2    changed=0    unreachable=0    failed=0    skipped=2    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 18 September 2025 10:34:36 +0000 (0:00:00.512) 0:00:08.292 ****
===============================================================================
Gathers facts about hosts ----------------------------------------------- 4.96s
osism.commons.facts : Copy fact files ----------------------------------- 1.28s
osism.commons.facts : Create custom facts directory --------------------- 1.19s
Gather facts for all hosts ---------------------------------------------- 0.51s

2025-09-18 10:34:49 | INFO  | Task 4378323b-c9f4-4866-9738-c721511c6f0d (frr) was prepared for execution.
2025-09-18 10:34:49 | INFO  | It takes a moment until task 4378323b-c9f4-4866-9738-c721511c6f0d (frr) has been started and output is visible here.
PLAY [Apply role frr] **********************************************************

TASK [osism.services.frr : Include distribution specific install tasks] ********
Thursday 18 September 2025 10:34:53 +0000 (0:00:00.238) 0:00:00.238 ****
included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/frr/tasks/install-Debian-family.yml for testbed-manager

TASK [osism.services.frr : Pin frr package version] ****************************
Thursday 18 September 2025 10:34:53 +0000 (0:00:00.238) 0:00:00.476 ****
changed: [testbed-manager]

TASK [osism.services.frr : Install frr package] ********************************
Thursday 18 September 2025 10:34:55 +0000 (0:00:01.121) 0:00:01.598 ****
changed: [testbed-manager]

TASK [osism.services.frr : Copy file: /etc/frr/vtysh.conf] *********************
Thursday 18 September 2025 10:35:05 +0000 (0:00:10.030) 0:00:11.628 ****
ok: [testbed-manager]

TASK [osism.services.frr : Copy file: /etc/frr/daemons] ************************
Thursday 18 September 2025 10:35:06 +0000 (0:00:01.424) 0:00:13.053 ****
changed: [testbed-manager]

TASK [osism.services.frr : Set _frr_uplinks fact] ******************************
Thursday 18 September 2025 10:35:07 +0000 (0:00:01.051) 0:00:14.104 ****
ok: [testbed-manager]

TASK [osism.services.frr : Check for frr.conf file in the configuration repository] ***
Thursday 18 September 2025 10:35:08 +0000 (0:00:01.235) 0:00:15.340 ****
ok: [testbed-manager -> localhost]

TASK [osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf] ***
Thursday 18 September 2025 10:35:09 +0000 (0:00:00.804) 0:00:16.145 ****
skipping: [testbed-manager]

TASK [osism.services.frr : Copy file from the role: /etc/frr/frr.conf] *********
Thursday 18 September 2025 10:35:09 +0000 (0:00:00.147) 0:00:16.293 ****
changed: [testbed-manager]

TASK [osism.services.frr : Set sysctl parameters] ******************************
Thursday 18 September 2025 10:35:10 +0000 (0:00:00.935) 0:00:17.229 ****
changed: [testbed-manager] => (item={'name': 'net.ipv4.ip_forward', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.send_redirects', 'value': 0})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.accept_redirects', 'value': 0})
changed: [testbed-manager] => (item={'name': 'net.ipv4.fib_multipath_hash_policy', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.default.ignore_routes_with_linkdown', 'value': 1})
changed: [testbed-manager] => (item={'name': 'net.ipv4.conf.all.rp_filter', 'value': 2})

TASK [osism.services.frr : Manage frr service] *********************************
Thursday 18 September 2025 10:35:12 +0000 (0:00:02.017) 0:00:19.247 ****
ok: [testbed-manager]

RUNNING HANDLER [osism.services.frr : Restart frr service] *********************
Thursday 18 September 2025 10:35:13 +0000 (0:00:01.219) 0:00:20.466 ****
changed: [testbed-manager]

PLAY RECAP *********************************************************************
testbed-manager            : ok=11   changed=6    unreachable=0    failed=0    skipped=1    rescued=0    ignored=0

TASKS RECAP ********************************************************************
Thursday 18 September 2025 10:35:15 +0000 (0:00:01.339) 0:00:21.805 ****
===============================================================================
osism.services.frr : Install frr package ------------------------------- 10.03s
osism.services.frr : Set sysctl parameters ------------------------------ 2.02s
osism.services.frr : Copy file: /etc/frr/vtysh.conf --------------------- 1.42s
osism.services.frr : Restart frr service -------------------------------- 1.34s
osism.services.frr : Set _frr_uplinks fact ------------------------------ 1.24s
osism.services.frr : Manage frr service --------------------------------- 1.22s
osism.services.frr : Pin frr package version ---------------------------- 1.12s
osism.services.frr : Copy file: /etc/frr/daemons ------------------------ 1.05s
osism.services.frr : Copy file from the role: /etc/frr/frr.conf --------- 0.94s
osism.services.frr : Check for frr.conf file in the configuration repository --- 0.81s
osism.services.frr : Include distribution specific install tasks -------- 0.24s
osism.services.frr : Copy file from the configuration repository: /etc/frr/frr.conf --- 0.15s

--> DEPLOY IN A NUTSHELL -- START -- Thu Sep 18 10:35:15 UTC 2025

2025-09-18 10:35:17 | INFO  | Collection nutshell is prepared for execution
2025-09-18 10:35:17 | INFO  | D [0] - dotfiles
2025-09-18 10:35:27 | INFO  | D [0] - homer
2025-09-18 10:35:27 | INFO  | D [0] - netdata
2025-09-18 10:35:27 | INFO  | D [0] - openstackclient
2025-09-18 10:35:27 | INFO  | D [0] - phpmyadmin
2025-09-18 10:35:27 | INFO  | A [0] - common
2025-09-18 10:35:27 | INFO  | A [1] -- loadbalancer
2025-09-18 10:35:27 | INFO  | D [2] --- opensearch
2025-09-18 10:35:27 | INFO  | A [2] --- mariadb-ng
2025-09-18 10:35:27 | INFO  | D [3] ---- horizon
2025-09-18 10:35:27 | INFO  | A [3] ---- keystone
2025-09-18 10:35:27 | INFO  | A [4] ----- neutron
2025-09-18 10:35:27 | INFO  | D [5] ------ wait-for-nova
2025-09-18 10:35:27 | INFO  | A [5] ------ octavia
2025-09-18 10:35:27 | INFO  | D [4] ----- barbican
2025-09-18 10:35:27 | INFO  | D [4] ----- designate
2025-09-18 10:35:27 | INFO  | D [4] ----- ironic
2025-09-18 10:35:27 | INFO  | D [4] ----- placement
2025-09-18 10:35:27 | INFO  | D [4] ----- magnum
2025-09-18 10:35:27 | INFO  | A [1] -- openvswitch
2025-09-18 10:35:27 | INFO  | D [2] --- ovn
2025-09-18 10:35:27 | INFO  | D [1] --
memcached 2025-09-18 10:35:27.785508 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [1] -- redis 2025-09-18 10:35:27.785868 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [1] -- rabbitmq-ng 2025-09-18 10:35:27.786405 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [0] - kubernetes 2025-09-18 10:35:27.790388 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [1] -- kubeconfig 2025-09-18 10:35:27.790442 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [1] -- copy-kubeconfig 2025-09-18 10:35:27.790947 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [0] - ceph 2025-09-18 10:35:27.794246 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [1] -- ceph-pools 2025-09-18 10:35:27.794374 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [2] --- copy-ceph-keys 2025-09-18 10:35:27.794396 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [3] ---- cephclient 2025-09-18 10:35:27.794408 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-09-18 10:35:27.794703 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [4] ----- wait-for-keystone 2025-09-18 10:35:27.794733 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [5] ------ kolla-ceph-rgw 2025-09-18 10:35:27.795087 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [5] ------ glance 2025-09-18 10:35:27.795110 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [5] ------ cinder 2025-09-18 10:35:27.795318 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [5] ------ nova 2025-09-18 10:35:27.795629 | orchestrator | 2025-09-18 10:35:27 | INFO  | A [4] ----- prometheus 2025-09-18 10:35:27.795650 | orchestrator | 2025-09-18 10:35:27 | INFO  | D [5] ------ grafana 2025-09-18 10:35:27.983671 | orchestrator | 2025-09-18 10:35:27 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-09-18 10:35:27.983767 | orchestrator | 2025-09-18 10:35:27 | INFO  | Tasks are running in the background 2025-09-18 10:35:31.370089 | orchestrator | 2025-09-18 10:35:31 | INFO  | No task IDs specified, wait for 
all currently running tasks 2025-09-18 10:35:33.481273 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:33.481380 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:33.485326 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:33.485726 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:33.486403 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:33.486944 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:33.487488 | orchestrator | 2025-09-18 10:35:33 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:33.487515 | orchestrator | 2025-09-18 10:35:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:36.542388 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:36.542458 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:36.542465 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:36.542471 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:36.542477 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:36.542482 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:36.542487 | orchestrator | 2025-09-18 10:35:36 | INFO  | Task 
09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:36.542493 | orchestrator | 2025-09-18 10:35:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:39.581532 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:39.581645 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:39.581992 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:39.582541 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:39.586409 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:39.586870 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:39.587627 | orchestrator | 2025-09-18 10:35:39 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:39.587653 | orchestrator | 2025-09-18 10:35:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:42.706520 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:42.706602 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:42.706616 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:42.706627 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:42.706638 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:42.706649 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task 
7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:42.706663 | orchestrator | 2025-09-18 10:35:42 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:42.706682 | orchestrator | 2025-09-18 10:35:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:45.710423 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:45.710520 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:45.710538 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:45.710557 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:45.710573 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:45.710584 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:45.710595 | orchestrator | 2025-09-18 10:35:45 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:45.710607 | orchestrator | 2025-09-18 10:35:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:48.785370 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:48.785678 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:48.787222 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:48.791882 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:48.792384 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task 
916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:48.793743 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:48.795244 | orchestrator | 2025-09-18 10:35:48 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:48.795285 | orchestrator | 2025-09-18 10:35:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:52.045641 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:52.045769 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:52.045795 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state STARTED 2025-09-18 10:35:52.045814 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:52.045866 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:52.045886 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:52.045903 | orchestrator | 2025-09-18 10:35:52 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:52.045921 | orchestrator | 2025-09-18 10:35:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:35:55.105108 | orchestrator | 2025-09-18 10:35:55.105207 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-09-18 10:35:55.105222 | orchestrator | 2025-09-18 10:35:55.105234 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-09-18 10:35:55.105246 | orchestrator | Thursday 18 September 2025 10:35:39 +0000 (0:00:00.792) 0:00:00.792 **** 2025-09-18 10:35:55.105258 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:35:55.105270 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:35:55.105281 | orchestrator | changed: [testbed-manager] 2025-09-18 10:35:55.105293 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:35:55.105303 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:35:55.105314 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:35:55.105326 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:35:55.105336 | orchestrator | 2025-09-18 10:35:55.105348 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-09-18 10:35:55.105359 | orchestrator | Thursday 18 September 2025 10:35:44 +0000 (0:00:04.887) 0:00:05.680 **** 2025-09-18 10:35:55.105370 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 10:35:55.105382 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-18 10:35:55.105393 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 10:35:55.105404 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 10:35:55.105415 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 10:35:55.105426 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 10:35:55.105436 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 10:35:55.105447 | orchestrator | 2025-09-18 10:35:55.105459 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-09-18 10:35:55.105471 | orchestrator | Thursday 18 September 2025 10:35:47 +0000 (0:00:02.507) 0:00:08.187 **** 2025-09-18 10:35:55.105487 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:45.263123', 'end': '2025-09-18 10:35:45.271509', 'delta': '0:00:00.008386', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105513 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:45.722687', 'end': '2025-09-18 10:35:45.729563', 'delta': '0:00:00.006876', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105547 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:45.413420', 'end': '2025-09-18 10:35:45.419889', 'delta': '0:00:00.006469', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105588 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:45.299480', 'end': '2025-09-18 10:35:45.305938', 'delta': '0:00:00.006458', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105602 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:46.643879', 'end': '2025-09-18 10:35:46.653594', 'delta': '0:00:00.009715', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105615 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:46.805258', 'end': '2025-09-18 10:35:46.811835', 'delta': '0:00:00.006577', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105928 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-09-18 10:35:46.020401', 'end': '2025-09-18 10:35:46.027540', 'delta': '0:00:00.007139', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-09-18 10:35:55.105962 | orchestrator | 2025-09-18 10:35:55.105974 | orchestrator | TASK [geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist.] **** 2025-09-18 10:35:55.106090 | orchestrator | Thursday 18 September 2025 10:35:48 +0000 (0:00:01.946) 0:00:10.134 **** 2025-09-18 10:35:55.106104 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 10:35:55.106115 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 10:35:55.106126 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 10:35:55.106137 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 10:35:55.106147 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-09-18 10:35:55.106158 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 10:35:55.106169 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 10:35:55.106180 | orchestrator | 2025-09-18 10:35:55.106191 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] 
****************** 2025-09-18 10:35:55.106202 | orchestrator | Thursday 18 September 2025 10:35:51 +0000 (0:00:02.085) 0:00:12.219 **** 2025-09-18 10:35:55.106213 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-09-18 10:35:55.106224 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-09-18 10:35:55.106235 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-09-18 10:35:55.106245 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-09-18 10:35:55.106256 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-09-18 10:35:55.106267 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-09-18 10:35:55.106277 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-09-18 10:35:55.106288 | orchestrator | 2025-09-18 10:35:55.106299 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:35:55.106322 | orchestrator | testbed-manager : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106335 | orchestrator | testbed-node-0 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106346 | orchestrator | testbed-node-1 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106357 | orchestrator | testbed-node-2 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106368 | orchestrator | testbed-node-3 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106379 | orchestrator | testbed-node-4 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106390 | orchestrator | testbed-node-5 : ok=5  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:35:55.106401 | orchestrator | 2025-09-18 10:35:55.106412 | orchestrator | 2025-09-18 10:35:55.106423 | orchestrator | TASKS 
RECAP ******************************************************************** 2025-09-18 10:35:55.106434 | orchestrator | Thursday 18 September 2025 10:35:54 +0000 (0:00:03.060) 0:00:15.280 **** 2025-09-18 10:35:55.106444 | orchestrator | =============================================================================== 2025-09-18 10:35:55.106456 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 4.89s 2025-09-18 10:35:55.106466 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 3.06s 2025-09-18 10:35:55.106488 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 2.51s 2025-09-18 10:35:55.106499 | orchestrator | geerlingguy.dotfiles : Ensure parent folders of link dotfiles exist. ---- 2.09s 2025-09-18 10:35:55.106509 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.95s 2025-09-18 10:35:55.106521 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:55.106532 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:55.106543 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task c21f7438-e132-4a42-90b1-d775ed26bee4 is in state SUCCESS 2025-09-18 10:35:55.106554 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:55.107175 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:55.125334 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:55.125397 | orchestrator | 2025-09-18 10:35:55 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:55.125410 | orchestrator | 2025-09-18 10:35:55 | INFO  | Wait 1 second(s) 
until the next check 2025-09-18 10:35:58.160229 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED 2025-09-18 10:35:58.162462 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:35:58.163046 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:35:58.163748 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:35:58.167273 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:35:58.169301 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:35:58.170049 | orchestrator | 2025-09-18 10:35:58 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:35:58.172116 | orchestrator | 2025-09-18 10:35:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:36:01.321034 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED 2025-09-18 10:36:01.321890 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED 2025-09-18 10:36:01.321921 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED 2025-09-18 10:36:01.321934 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:36:01.321945 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:36:01.321957 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:36:01.321968 | orchestrator | 2025-09-18 10:36:01 | INFO  | Task 
09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED 2025-09-18 10:36:01.322002 | orchestrator | 2025-09-18 10:36:01 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:36:04.383394 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED
2025-09-18 10:36:04.383501 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED
2025-09-18 10:36:04.383543 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state STARTED
2025-09-18 10:36:04.383555 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED
2025-09-18 10:36:04.475007 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:36:04.475102 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:36:04.475116 | orchestrator | 2025-09-18 10:36:04 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED
2025-09-18 10:36:04.475128 | orchestrator | 2025-09-18 10:36:04 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:36:22.801170 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED
2025-09-18 10:36:22.801276 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state STARTED
2025-09-18 10:36:22.801291 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task e491fb91-1d23-43a6-9b24-06e5b4137e30 is in state SUCCESS
2025-09-18 10:36:22.803180 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED
2025-09-18 10:36:22.820661 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:36:22.821377 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:36:22.822705 | orchestrator | 2025-09-18 10:36:22 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED
2025-09-18 10:36:22.823341 | orchestrator | 2025-09-18 10:36:22 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:36:25.888110 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED
2025-09-18 10:36:25.888213 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task fa919a81-abd0-4c09-91b1-05a3b23291a2 is in state SUCCESS
2025-09-18 10:36:25.888260 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED
2025-09-18 10:36:25.888273 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:36:25.888284 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:36:25.888295 | orchestrator | 2025-09-18 10:36:25 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED
2025-09-18 10:36:25.888306 | orchestrator | 2025-09-18 10:36:25 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:36:53.585064 | orchestrator | 2025-09-18 10:36:53 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state STARTED
2025-09-18 10:36:53.585424 | orchestrator | 2025-09-18 10:36:53 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED
2025-09-18 10:36:53.587419 | orchestrator | 2025-09-18 10:36:53 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:36:53.590290 | orchestrator | 2025-09-18 10:36:53 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:36:53.592247 | orchestrator | 2025-09-18 10:36:53 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state STARTED
2025-09-18 10:36:53.592283 | orchestrator | 2025-09-18 10:36:53 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:36:56.655772 | orchestrator |
2025-09-18 10:36:56.655880 | orchestrator |
2025-09-18 10:36:56.655971 | orchestrator | PLAY [Apply role homer] ******************************************************** 2025-09-18 10:36:56.656165 |
orchestrator | 2025-09-18 10:36:56.656185 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] *** 2025-09-18 10:36:56.656197 | orchestrator | Thursday 18 September 2025 10:35:40 +0000 (0:00:00.913) 0:00:00.913 **** 2025-09-18 10:36:56.656208 | orchestrator | ok: [testbed-manager] => { 2025-09-18 10:36:56.656222 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter." 2025-09-18 10:36:56.656235 | orchestrator | } 2025-09-18 10:36:56.656247 | orchestrator | 2025-09-18 10:36:56.656258 | orchestrator | TASK [osism.services.homer : Create traefik external network] ****************** 2025-09-18 10:36:56.656269 | orchestrator | Thursday 18 September 2025 10:35:41 +0000 (0:00:00.599) 0:00:01.513 **** 2025-09-18 10:36:56.656280 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.656292 | orchestrator | 2025-09-18 10:36:56.656303 | orchestrator | TASK [osism.services.homer : Create required directories] ********************** 2025-09-18 10:36:56.656314 | orchestrator | Thursday 18 September 2025 10:35:43 +0000 (0:00:02.320) 0:00:03.833 **** 2025-09-18 10:36:56.656325 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration) 2025-09-18 10:36:56.656336 | orchestrator | ok: [testbed-manager] => (item=/opt/homer) 2025-09-18 10:36:56.656347 | orchestrator | 2025-09-18 10:36:56.656358 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] *************** 2025-09-18 10:36:56.656369 | orchestrator | Thursday 18 September 2025 10:35:44 +0000 (0:00:01.362) 0:00:05.196 **** 2025-09-18 10:36:56.656380 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.656391 | orchestrator | 2025-09-18 10:36:56.656402 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] ********************* 2025-09-18 10:36:56.656413 | orchestrator | Thursday 18 September 2025 10:35:47 +0000 
(0:00:02.909) 0:00:08.106 **** 2025-09-18 10:36:56.656424 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.656435 | orchestrator | 2025-09-18 10:36:56.656446 | orchestrator | TASK [osism.services.homer : Manage homer service] ***************************** 2025-09-18 10:36:56.656457 | orchestrator | Thursday 18 September 2025 10:35:50 +0000 (0:00:03.156) 0:00:11.262 **** 2025-09-18 10:36:56.656468 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left). 2025-09-18 10:36:56.656478 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.656489 | orchestrator | 2025-09-18 10:36:56.656500 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] ***************** 2025-09-18 10:36:56.656518 | orchestrator | Thursday 18 September 2025 10:36:18 +0000 (0:00:28.013) 0:00:39.276 **** 2025-09-18 10:36:56.656543 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.656570 | orchestrator | 2025-09-18 10:36:56.656589 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:36:56.656610 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.656631 | orchestrator | 2025-09-18 10:36:56.656651 | orchestrator | 2025-09-18 10:36:56.656670 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:36:56.656690 | orchestrator | Thursday 18 September 2025 10:36:22 +0000 (0:00:03.209) 0:00:42.485 **** 2025-09-18 10:36:56.656711 | orchestrator | =============================================================================== 2025-09-18 10:36:56.656731 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 28.01s 2025-09-18 10:36:56.656752 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 3.21s 2025-09-18 10:36:56.656789 | orchestrator | 
osism.services.homer : Copy docker-compose.yml file --------------------- 3.16s 2025-09-18 10:36:56.656802 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.91s 2025-09-18 10:36:56.656814 | orchestrator | osism.services.homer : Create traefik external network ------------------ 2.32s 2025-09-18 10:36:56.656827 | orchestrator | osism.services.homer : Create required directories ---------------------- 1.36s 2025-09-18 10:36:56.656839 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.60s 2025-09-18 10:36:56.656851 | orchestrator | 2025-09-18 10:36:56.656863 | orchestrator | 2025-09-18 10:36:56.656875 | orchestrator | PLAY [Apply role openstackclient] ********************************************** 2025-09-18 10:36:56.656888 | orchestrator | 2025-09-18 10:36:56.656936 | orchestrator | TASK [osism.services.openstackclient : Include tasks] ************************** 2025-09-18 10:36:56.656948 | orchestrator | Thursday 18 September 2025 10:35:39 +0000 (0:00:00.504) 0:00:00.504 **** 2025-09-18 10:36:56.656959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager 2025-09-18 10:36:56.656972 | orchestrator | 2025-09-18 10:36:56.656982 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************ 2025-09-18 10:36:56.657045 | orchestrator | Thursday 18 September 2025 10:35:40 +0000 (0:00:00.410) 0:00:00.914 **** 2025-09-18 10:36:56.657058 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack) 2025-09-18 10:36:56.657069 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data) 2025-09-18 10:36:56.657080 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient) 2025-09-18 10:36:56.657091 | orchestrator | 2025-09-18 10:36:56.657102 | orchestrator | TASK 
[osism.services.openstackclient : Copy docker-compose.yml file] *********** 2025-09-18 10:36:56.657113 | orchestrator | Thursday 18 September 2025 10:35:42 +0000 (0:00:02.075) 0:00:02.989 **** 2025-09-18 10:36:56.657125 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657135 | orchestrator | 2025-09-18 10:36:56.657146 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] ********* 2025-09-18 10:36:56.657157 | orchestrator | Thursday 18 September 2025 10:35:43 +0000 (0:00:01.876) 0:00:04.866 **** 2025-09-18 10:36:56.657190 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left). 2025-09-18 10:36:56.657202 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.657213 | orchestrator | 2025-09-18 10:36:56.657224 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] ********** 2025-09-18 10:36:56.657235 | orchestrator | Thursday 18 September 2025 10:36:17 +0000 (0:00:33.368) 0:00:38.235 **** 2025-09-18 10:36:56.657245 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657256 | orchestrator | 2025-09-18 10:36:56.657267 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] ********** 2025-09-18 10:36:56.657278 | orchestrator | Thursday 18 September 2025 10:36:19 +0000 (0:00:01.712) 0:00:39.947 **** 2025-09-18 10:36:56.657289 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.657300 | orchestrator | 2025-09-18 10:36:56.657311 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] *** 2025-09-18 10:36:56.657327 | orchestrator | Thursday 18 September 2025 10:36:20 +0000 (0:00:01.130) 0:00:41.078 **** 2025-09-18 10:36:56.657338 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657348 | orchestrator | 2025-09-18 10:36:56.657359 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] *** 
2025-09-18 10:36:56.657370 | orchestrator | Thursday 18 September 2025 10:36:22 +0000 (0:00:02.325) 0:00:43.403 **** 2025-09-18 10:36:56.657381 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657392 | orchestrator | 2025-09-18 10:36:56.657402 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] *** 2025-09-18 10:36:56.657413 | orchestrator | Thursday 18 September 2025 10:36:23 +0000 (0:00:01.104) 0:00:44.508 **** 2025-09-18 10:36:56.657424 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657444 | orchestrator | 2025-09-18 10:36:56.657455 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] *** 2025-09-18 10:36:56.657466 | orchestrator | Thursday 18 September 2025 10:36:24 +0000 (0:00:00.782) 0:00:45.291 **** 2025-09-18 10:36:56.657477 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.657488 | orchestrator | 2025-09-18 10:36:56.657499 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:36:56.657510 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.657520 | orchestrator | 2025-09-18 10:36:56.657531 | orchestrator | 2025-09-18 10:36:56.657542 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:36:56.657553 | orchestrator | Thursday 18 September 2025 10:36:25 +0000 (0:00:00.861) 0:00:46.152 **** 2025-09-18 10:36:56.657563 | orchestrator | =============================================================================== 2025-09-18 10:36:56.657574 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 33.37s 2025-09-18 10:36:56.657585 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.33s 2025-09-18 10:36:56.657596 | orchestrator | osism.services.openstackclient : Create required 
directories ------------ 2.08s 2025-09-18 10:36:56.657607 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.88s 2025-09-18 10:36:56.657618 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.71s 2025-09-18 10:36:56.657628 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.13s 2025-09-18 10:36:56.657639 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.10s 2025-09-18 10:36:56.657650 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.86s 2025-09-18 10:36:56.657661 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.78s 2025-09-18 10:36:56.657672 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.41s 2025-09-18 10:36:56.657683 | orchestrator | 2025-09-18 10:36:56.657694 | orchestrator | 2025-09-18 10:36:56.657704 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-09-18 10:36:56.657715 | orchestrator | 2025-09-18 10:36:56.657726 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-09-18 10:36:56.657737 | orchestrator | Thursday 18 September 2025 10:35:59 +0000 (0:00:00.242) 0:00:00.242 **** 2025-09-18 10:36:56.657748 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.657758 | orchestrator | 2025-09-18 10:36:56.657769 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-09-18 10:36:56.657780 | orchestrator | Thursday 18 September 2025 10:36:01 +0000 (0:00:01.337) 0:00:01.580 **** 2025-09-18 10:36:56.657791 | orchestrator | changed: [testbed-manager] => (item=/opt/phpmyadmin) 2025-09-18 10:36:56.657802 | orchestrator | 2025-09-18 10:36:56.657813 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] 
**************** 2025-09-18 10:36:56.657824 | orchestrator | Thursday 18 September 2025 10:36:01 +0000 (0:00:00.593) 0:00:02.174 **** 2025-09-18 10:36:56.657834 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657845 | orchestrator | 2025-09-18 10:36:56.657856 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-09-18 10:36:56.657867 | orchestrator | Thursday 18 September 2025 10:36:02 +0000 (0:00:01.303) 0:00:03.477 **** 2025-09-18 10:36:56.657877 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-09-18 10:36:56.657888 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.657899 | orchestrator | 2025-09-18 10:36:56.657926 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-09-18 10:36:56.657937 | orchestrator | Thursday 18 September 2025 10:36:50 +0000 (0:00:47.961) 0:00:51.439 **** 2025-09-18 10:36:56.657948 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.657959 | orchestrator | 2025-09-18 10:36:56.657978 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:36:56.657989 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.658000 | orchestrator | 2025-09-18 10:36:56.658011 | orchestrator | 2025-09-18 10:36:56.658116 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:36:56.658139 | orchestrator | Thursday 18 September 2025 10:36:55 +0000 (0:00:04.364) 0:00:55.804 **** 2025-09-18 10:36:56.658150 | orchestrator | =============================================================================== 2025-09-18 10:36:56.658161 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 47.96s 2025-09-18 10:36:56.658172 | orchestrator | osism.services.phpmyadmin : Restart 
phpmyadmin service ------------------ 4.37s 2025-09-18 10:36:56.658183 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 1.34s 2025-09-18 10:36:56.658194 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.30s 2025-09-18 10:36:56.658205 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.59s 2025-09-18 10:36:56.658223 | orchestrator | 2025-09-18 10:36:56 | INFO  | Task fbce966f-a83c-4453-b910-99161a8ea7a4 is in state SUCCESS 2025-09-18 10:36:56.658234 | orchestrator | 2025-09-18 10:36:56 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:36:56.658429 | orchestrator | 2025-09-18 10:36:56 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:36:56.659529 | orchestrator | 2025-09-18 10:36:56 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:36:56.660605 | orchestrator | 2025-09-18 10:36:56 | INFO  | Task 09c226f2-97af-4fc1-b5ed-6d6a7c0fc8bf is in state SUCCESS 2025-09-18 10:36:56.661098 | orchestrator | 2025-09-18 10:36:56.661120 | orchestrator | 2025-09-18 10:36:56.661131 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:36:56.661141 | orchestrator | 2025-09-18 10:36:56.661150 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:36:56.661160 | orchestrator | Thursday 18 September 2025 10:35:39 +0000 (0:00:00.406) 0:00:00.406 **** 2025-09-18 10:36:56.661171 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True) 2025-09-18 10:36:56.661180 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True) 2025-09-18 10:36:56.661190 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True) 2025-09-18 10:36:56.661199 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True) 
2025-09-18 10:36:56.661209 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True) 2025-09-18 10:36:56.661219 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True) 2025-09-18 10:36:56.661228 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True) 2025-09-18 10:36:56.661238 | orchestrator | 2025-09-18 10:36:56.661247 | orchestrator | PLAY [Apply role netdata] ****************************************************** 2025-09-18 10:36:56.661257 | orchestrator | 2025-09-18 10:36:56.661266 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] **** 2025-09-18 10:36:56.661276 | orchestrator | Thursday 18 September 2025 10:35:41 +0000 (0:00:02.096) 0:00:02.502 **** 2025-09-18 10:36:56.661300 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:36:56.661312 | orchestrator | 2025-09-18 10:36:56.661322 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] *** 2025-09-18 10:36:56.661332 | orchestrator | Thursday 18 September 2025 10:35:43 +0000 (0:00:01.825) 0:00:04.327 **** 2025-09-18 10:36:56.661342 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:36:56.661352 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:36:56.661376 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.661386 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:36:56.661395 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:36:56.661405 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:36:56.661415 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:36:56.661424 | orchestrator | 2025-09-18 10:36:56.661434 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************ 2025-09-18 10:36:56.661444 | orchestrator | 
Thursday 18 September 2025 10:35:45 +0000 (0:00:01.971) 0:00:06.299 **** 2025-09-18 10:36:56.661454 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:36:56.661463 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:36:56.661473 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:36:56.661482 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:36:56.661492 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:36:56.661502 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.661511 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:36:56.661521 | orchestrator | 2025-09-18 10:36:56.661531 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] ************************* 2025-09-18 10:36:56.661540 | orchestrator | Thursday 18 September 2025 10:35:48 +0000 (0:00:03.116) 0:00:09.415 **** 2025-09-18 10:36:56.661550 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:36:56.661560 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:36:56.661570 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:36:56.661580 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:36:56.661590 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:36:56.661599 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.661609 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:36:56.661619 | orchestrator | 2025-09-18 10:36:56.661628 | orchestrator | TASK [osism.services.netdata : Add repository] ********************************* 2025-09-18 10:36:56.661638 | orchestrator | Thursday 18 September 2025 10:35:51 +0000 (0:00:02.865) 0:00:12.281 **** 2025-09-18 10:36:56.661648 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:36:56.661658 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:36:56.661667 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:36:56.661677 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:36:56.661686 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:36:56.661696 | orchestrator | changed: 
[testbed-node-5] 2025-09-18 10:36:56.661705 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.661716 | orchestrator | 2025-09-18 10:36:56.661727 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************ 2025-09-18 10:36:56.661737 | orchestrator | Thursday 18 September 2025 10:36:05 +0000 (0:00:14.027) 0:00:26.308 **** 2025-09-18 10:36:56.661748 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:36:56.661759 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:36:56.661770 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:36:56.661780 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:36:56.661791 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:36:56.661802 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:36:56.661812 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.661823 | orchestrator | 2025-09-18 10:36:56.661833 | orchestrator | TASK [osism.services.netdata : Include config tasks] *************************** 2025-09-18 10:36:56.661844 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:25.564) 0:00:51.873 **** 2025-09-18 10:36:56.661862 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:36:56.661875 | orchestrator | 2025-09-18 10:36:56.661886 | orchestrator | TASK [osism.services.netdata : Copy configuration files] *********************** 2025-09-18 10:36:56.661897 | orchestrator | Thursday 18 September 2025 10:36:32 +0000 (0:00:01.258) 0:00:53.131 **** 2025-09-18 10:36:56.661930 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf) 2025-09-18 10:36:56.661942 | orchestrator | changed: [testbed-manager] => (item=netdata.conf) 2025-09-18 10:36:56.661954 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf) 2025-09-18 
10:36:56.661972 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf) 2025-09-18 10:36:56.661991 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf) 2025-09-18 10:36:56.662003 | orchestrator | changed: [testbed-node-0] => (item=stream.conf) 2025-09-18 10:36:56.662013 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf) 2025-09-18 10:36:56.662064 | orchestrator | changed: [testbed-node-1] => (item=stream.conf) 2025-09-18 10:36:56.662081 | orchestrator | changed: [testbed-manager] => (item=stream.conf) 2025-09-18 10:36:56.662099 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf) 2025-09-18 10:36:56.662122 | orchestrator | changed: [testbed-node-2] => (item=stream.conf) 2025-09-18 10:36:56.662142 | orchestrator | changed: [testbed-node-3] => (item=stream.conf) 2025-09-18 10:36:56.662158 | orchestrator | changed: [testbed-node-5] => (item=stream.conf) 2025-09-18 10:36:56.662176 | orchestrator | changed: [testbed-node-4] => (item=stream.conf) 2025-09-18 10:36:56.662193 | orchestrator | 2025-09-18 10:36:56.662208 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] *** 2025-09-18 10:36:56.662224 | orchestrator | Thursday 18 September 2025 10:36:39 +0000 (0:00:07.241) 0:01:00.373 **** 2025-09-18 10:36:56.662242 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.662261 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:36:56.662280 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:36:56.662299 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:36:56.662317 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:36:56.662335 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:36:56.662353 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:36:56.662371 | orchestrator | 2025-09-18 10:36:56.662390 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] ************** 2025-09-18 10:36:56.662407 | orchestrator | Thursday 18 
September 2025 10:36:41 +0000 (0:00:01.463) 0:01:01.837 **** 2025-09-18 10:36:56.662424 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:36:56.662442 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:36:56.662461 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:36:56.662479 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.662497 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:36:56.662516 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:36:56.662534 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:36:56.662554 | orchestrator | 2025-09-18 10:36:56.662571 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] *************** 2025-09-18 10:36:56.662587 | orchestrator | Thursday 18 September 2025 10:36:42 +0000 (0:00:01.518) 0:01:03.355 **** 2025-09-18 10:36:56.662597 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.662606 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:36:56.662616 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:36:56.662626 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:36:56.662635 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:36:56.662645 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:36:56.662654 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:36:56.662664 | orchestrator | 2025-09-18 10:36:56.662673 | orchestrator | TASK [osism.services.netdata : Manage service netdata] ************************* 2025-09-18 10:36:56.662683 | orchestrator | Thursday 18 September 2025 10:36:43 +0000 (0:00:01.174) 0:01:04.530 **** 2025-09-18 10:36:56.662693 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:36:56.662702 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:36:56.662711 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:36:56.662721 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:36:56.662730 | orchestrator | ok: [testbed-manager] 2025-09-18 10:36:56.662740 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:36:56.662749 | 
orchestrator | ok: [testbed-node-5] 2025-09-18 10:36:56.662759 | orchestrator | 2025-09-18 10:36:56.662769 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] *************** 2025-09-18 10:36:56.662778 | orchestrator | Thursday 18 September 2025 10:36:45 +0000 (0:00:01.982) 0:01:06.513 **** 2025-09-18 10:36:56.662788 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager 2025-09-18 10:36:56.662810 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:36:56.662820 | orchestrator | 2025-09-18 10:36:56.662830 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] ********** 2025-09-18 10:36:56.662840 | orchestrator | Thursday 18 September 2025 10:36:47 +0000 (0:00:01.523) 0:01:08.037 **** 2025-09-18 10:36:56.662849 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.662859 | orchestrator | 2025-09-18 10:36:56.662868 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] ************* 2025-09-18 10:36:56.662878 | orchestrator | Thursday 18 September 2025 10:36:49 +0000 (0:00:02.253) 0:01:10.291 **** 2025-09-18 10:36:56.662887 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:36:56.662897 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:36:56.662971 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:36:56.662982 | orchestrator | changed: [testbed-manager] 2025-09-18 10:36:56.662991 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:36:56.663001 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:36:56.663010 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:36:56.663019 | orchestrator | 2025-09-18 10:36:56.663027 | orchestrator | PLAY RECAP 
********************************************************************* 2025-09-18 10:36:56.663035 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663049 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663058 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663066 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663081 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663089 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663097 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:36:56.663105 | orchestrator | 2025-09-18 10:36:56.663113 | orchestrator | 2025-09-18 10:36:56.663121 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:36:56.663129 | orchestrator | Thursday 18 September 2025 10:36:53 +0000 (0:00:03.993) 0:01:14.285 **** 2025-09-18 10:36:56.663137 | orchestrator | =============================================================================== 2025-09-18 10:36:56.663146 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 25.56s 2025-09-18 10:36:56.663154 | orchestrator | osism.services.netdata : Add repository -------------------------------- 14.03s 2025-09-18 10:36:56.663161 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 7.24s 2025-09-18 10:36:56.663169 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.99s 2025-09-18 10:36:56.663177 | orchestrator | 
osism.services.netdata : Install apt-transport-https package ------------ 3.12s 2025-09-18 10:36:56.663185 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.87s 2025-09-18 10:36:56.663193 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.25s 2025-09-18 10:36:56.663201 | orchestrator | Group hosts based on enabled services ----------------------------------- 2.10s 2025-09-18 10:36:56.663209 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 1.98s 2025-09-18 10:36:56.663223 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 1.97s 2025-09-18 10:36:56.663231 | orchestrator | osism.services.netdata : Include distribution specific install tasks ---- 1.83s 2025-09-18 10:36:56.663239 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.52s 2025-09-18 10:36:56.663246 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 1.52s 2025-09-18 10:36:56.663254 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.46s 2025-09-18 10:36:56.663262 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 1.26s 2025-09-18 10:36:56.663270 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.18s 2025-09-18 10:36:56.663278 | orchestrator | 2025-09-18 10:36:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:36:59.705477 | orchestrator | 2025-09-18 10:36:59 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state STARTED 2025-09-18 10:36:59.707463 | orchestrator | 2025-09-18 10:36:59 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:36:59.709572 | orchestrator | 2025-09-18 10:36:59 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 
10:36:59.709650 | orchestrator | 2025-09-18 10:36:59 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:36:59.709700 | orchestrator | 2025-09-18 10:36:59 | INFO  | Wait 1 second(s) until the next
check 2025-09-18 10:38:00.723202 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:00.723733 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state STARTED 2025-09-18 10:38:00.727741 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task 93836b1a-d2ad-4a32-9727-017fcf21993f is in state SUCCESS 2025-09-18 10:38:00.734718 | orchestrator | 2025-09-18 10:38:00.734812 | orchestrator | 2025-09-18 10:38:00.734932 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-09-18 10:38:00.734954 | orchestrator | 2025-09-18 10:38:00.734974 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-18 10:38:00.734994 | orchestrator | Thursday 18 September 2025 10:35:32 +0000 (0:00:00.277) 0:00:00.277 **** 2025-09-18 10:38:00.735021 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:38:00.735042 | orchestrator | 2025-09-18 10:38:00.735062 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-09-18 10:38:00.735081 | orchestrator | Thursday 18 September 2025 10:35:34 +0000 (0:00:01.357) 0:00:01.634 **** 2025-09-18 10:38:00.735100 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735119 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735137 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735156 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735176 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 
2025-09-18 10:38:00.735194 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735211 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735226 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735242 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735259 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735277 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-09-18 10:38:00.735294 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735311 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735330 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735346 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735364 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735381 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735399 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-09-18 10:38:00.735417 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735435 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-09-18 10:38:00.735452 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2025-09-18 10:38:00.735470 | orchestrator | 2025-09-18 10:38:00.735487 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-09-18 10:38:00.735531 | orchestrator | Thursday 18 September 2025 10:35:37 +0000 (0:00:03.873) 0:00:05.508 **** 2025-09-18 10:38:00.735550 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:38:00.735570 | orchestrator | 2025-09-18 10:38:00.735586 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-09-18 10:38:00.735603 | orchestrator | Thursday 18 September 2025 10:35:39 +0000 (0:00:01.232) 0:00:06.740 **** 2025-09-18 10:38:00.735624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735646 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735701 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735720 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735737 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735755 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735772 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.735802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735884 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735909 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735944 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735971 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.735989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736021 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736037 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736063 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736086 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736104 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.736120 | orchestrator | changed: 
[testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.736137 | orchestrator |
2025-09-18 10:38:00.736154 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] ***
2025-09-18 10:38:00.736171 | orchestrator | Thursday 18 September 2025 10:35:43 +0000 (0:00:04.767) 0:00:11.508 ****
2025-09-18 10:38:00.736200 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-18 10:38:00.736217 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.736233 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': 
{'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736249 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:38:00.736267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736299 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736317 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736334 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:38:00.736351 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736369 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736467 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:38:00.736511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736545 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736572 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:38:00.736588 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:38:00.736605 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736622 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736640 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736656 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:38:00.736672 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736698 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': 
'/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.736717 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.736734 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:38:00.736751 | orchestrator |
2025-09-18 10:38:00.736768 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-09-18 10:38:00.736784 | orchestrator | Thursday 18 September 2025 10:35:45 +0000 (0:00:01.928) 0:00:13.436 ****
2025-09-18 10:38:00.736801 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-18 10:38:00.736854 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736873 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736889 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:38:00.736905 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.736978 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.736995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737039 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:38:00.737057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.737075 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737109 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:38:00.737126 
| orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.737151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737200 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:38:00.737215 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:38:00.737231 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 
'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-09-18 10:38:00.737249 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.737283 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:38:00.737300 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', 
'/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-18 10:38:00.737317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.737334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:38:00.737351 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:38:00.737368 | orchestrator |
2025-09-18 10:38:00.737385 | orchestrator | TASK [common : Copying over /run subdirectories conf] **************************
2025-09-18 10:38:00.737401 | orchestrator | Thursday 18 September 2025 10:35:49 +0000 (0:00:03.131) 0:00:16.568 ****
2025-09-18 10:38:00.737418 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:38:00.737432 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:38:00.737459 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:38:00.737476 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:38:00.737492 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:38:00.737516 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:38:00.737533 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:38:00.737548 | orchestrator |
2025-09-18 10:38:00.737564 | orchestrator | TASK [common : Restart systemd-tmpfiles] ***************************************
2025-09-18 10:38:00.737580 | orchestrator | Thursday 18 September 2025 10:35:50 +0000 (0:00:01.578) 0:00:18.146 ****
2025-09-18 10:38:00.737595 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:38:00.737611 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:38:00.737634 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:38:00.737650 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:38:00.737703 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:38:00.737721 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:38:00.737737 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:38:00.737752 | orchestrator |
2025-09-18 10:38:00.737769 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-09-18 10:38:00.737785 | orchestrator | Thursday 18 September 2025 10:35:52 +0000 (0:00:01.581) 0:00:19.727 ****
2025-09-18 10:38:00.737802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-09-18 10:38:00.737890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.737911 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.737926 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.737940 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.737953 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 
'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.737994 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738078 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.738092 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738144 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738209 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 
'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738222 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738237 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738268 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738292 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.738307 | orchestrator | 2025-09-18 10:38:00.738323 | orchestrator | TASK [common : Find custom fluentd input config files] ************************* 2025-09-18 10:38:00.738338 | orchestrator | Thursday 18 September 2025 10:35:58 +0000 (0:00:06.027) 0:00:25.755 **** 2025-09-18 10:38:00.738353 | orchestrator | [WARNING]: Skipped 2025-09-18 10:38:00.738369 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due 2025-09-18 10:38:00.738384 | orchestrator | to this access issue: 2025-09-18 10:38:00.738400 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a 2025-09-18 10:38:00.738415 | orchestrator | directory 2025-09-18 10:38:00.738432 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:38:00.738448 | orchestrator | 2025-09-18 10:38:00.738464 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************ 2025-09-18 10:38:00.738481 | orchestrator | Thursday 18 September 2025 10:35:59 +0000 (0:00:00.969) 0:00:26.724 **** 2025-09-18 10:38:00.738497 | orchestrator | [WARNING]: Skipped 2025-09-18 10:38:00.738513 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due 
2025-09-18 10:38:00.738535 | orchestrator | to this access issue: 2025-09-18 10:38:00.738551 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a 2025-09-18 10:38:00.738566 | orchestrator | directory 2025-09-18 10:38:00.738583 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:38:00.738599 | orchestrator | 2025-09-18 10:38:00.738651 | orchestrator | TASK [common : Find custom fluentd format config files] ************************ 2025-09-18 10:38:00.738680 | orchestrator | Thursday 18 September 2025 10:36:00 +0000 (0:00:00.973) 0:00:27.698 **** 2025-09-18 10:38:00.738696 | orchestrator | [WARNING]: Skipped 2025-09-18 10:38:00.738710 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due 2025-09-18 10:38:00.738725 | orchestrator | to this access issue: 2025-09-18 10:38:00.738740 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a 2025-09-18 10:38:00.738754 | orchestrator | directory 2025-09-18 10:38:00.738769 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:38:00.738784 | orchestrator | 2025-09-18 10:38:00.738799 | orchestrator | TASK [common : Find custom fluentd output config files] ************************ 2025-09-18 10:38:00.738855 | orchestrator | Thursday 18 September 2025 10:36:00 +0000 (0:00:00.790) 0:00:28.488 **** 2025-09-18 10:38:00.738870 | orchestrator | [WARNING]: Skipped 2025-09-18 10:38:00.738883 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due 2025-09-18 10:38:00.738896 | orchestrator | to this access issue: 2025-09-18 10:38:00.738909 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a 2025-09-18 10:38:00.738923 | orchestrator | directory 2025-09-18 10:38:00.738936 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:38:00.739021 | orchestrator | 2025-09-18 10:38:00.739039 | orchestrator | 
TASK [common : Copying over fluentd.conf] ************************************** 2025-09-18 10:38:00.739052 | orchestrator | Thursday 18 September 2025 10:36:01 +0000 (0:00:00.919) 0:00:29.408 **** 2025-09-18 10:38:00.739065 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.739077 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.739091 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.739104 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.739116 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.739129 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.739141 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.739167 | orchestrator | 2025-09-18 10:38:00.739180 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************ 2025-09-18 10:38:00.739194 | orchestrator | Thursday 18 September 2025 10:36:05 +0000 (0:00:04.055) 0:00:33.463 **** 2025-09-18 10:38:00.739206 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739220 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739233 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739246 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739259 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739271 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739285 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2) 2025-09-18 10:38:00.739297 | 
orchestrator | 2025-09-18 10:38:00.739309 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] *************************** 2025-09-18 10:38:00.739322 | orchestrator | Thursday 18 September 2025 10:36:09 +0000 (0:00:03.507) 0:00:36.970 **** 2025-09-18 10:38:00.739335 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.739348 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.739361 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.739373 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.739386 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.739399 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.739411 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.739424 | orchestrator | 2025-09-18 10:38:00.739437 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] *** 2025-09-18 10:38:00.739450 | orchestrator | Thursday 18 September 2025 10:36:12 +0000 (0:00:03.284) 0:00:40.255 **** 2025-09-18 10:38:00.739465 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739492 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 
'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739509 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739540 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739554 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739569 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739582 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739596 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739655 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739671 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 
'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739685 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739699 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739709 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 
'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739738 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.739750 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:38:00.739765 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739773 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.739781 | orchestrator | 2025-09-18 10:38:00.739790 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************ 2025-09-18 10:38:00.739797 | orchestrator | Thursday 18 September 2025 10:36:16 +0000 (0:00:03.385) 0:00:43.641 **** 2025-09-18 10:38:00.739805 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739836 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739849 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739857 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739865 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739874 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739881 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2) 2025-09-18 10:38:00.739889 | orchestrator | 2025-09-18 10:38:00.739897 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] ********************** 2025-09-18 10:38:00.739905 | orchestrator 
| Thursday 18 September 2025 10:36:20 +0000 (0:00:04.427) 0:00:48.069 **** 2025-09-18 10:38:00.739913 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739921 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739929 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739937 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739944 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739952 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739960 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2) 2025-09-18 10:38:00.739968 | orchestrator | 2025-09-18 10:38:00.739976 | orchestrator | TASK [common : Check common containers] **************************************** 2025-09-18 10:38:00.739984 | orchestrator | Thursday 18 September 2025 10:36:23 +0000 (0:00:03.134) 0:00:51.203 **** 2025-09-18 10:38:00.739992 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740016 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740025 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740033 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740042 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740050 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740059 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740067 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740090 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 
'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740102 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/fluentd:2024.2', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-09-18 10:38:00.740111 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740121 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740138 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740161 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740174 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/kolla-toolbox:2024.2', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740186 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740195 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 
10:38:00.740205 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740219 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/cron:2024.2', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:38:00.740233 | orchestrator | 2025-09-18 10:38:00.740245 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-09-18 10:38:00.740258 | orchestrator | Thursday 18 September 2025 10:36:27 +0000 (0:00:04.179) 0:00:55.383 **** 2025-09-18 10:38:00.740272 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.740286 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.740299 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.740313 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.740322 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.740330 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.740338 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.740346 | orchestrator | 2025-09-18 10:38:00.740354 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-09-18 10:38:00.740362 | orchestrator | Thursday 18 September 2025 10:36:30 +0000 (0:00:02.518) 0:00:57.902 **** 2025-09-18 10:38:00.740370 | 
orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.740378 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.740393 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.740401 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.740409 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.740417 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.740424 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.740432 | orchestrator | 2025-09-18 10:38:00.740445 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740457 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:01.199) 0:00:59.102 **** 2025-09-18 10:38:00.740471 | orchestrator | 2025-09-18 10:38:00.740483 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740496 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:00.065) 0:00:59.167 **** 2025-09-18 10:38:00.740508 | orchestrator | 2025-09-18 10:38:00.740521 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740534 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:00.058) 0:00:59.226 **** 2025-09-18 10:38:00.740546 | orchestrator | 2025-09-18 10:38:00.740560 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740573 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:00.061) 0:00:59.287 **** 2025-09-18 10:38:00.740587 | orchestrator | 2025-09-18 10:38:00.740601 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740615 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:00.201) 0:00:59.488 **** 2025-09-18 10:38:00.740627 | orchestrator | 2025-09-18 10:38:00.740641 | orchestrator | TASK [common : 
Flush handlers] ************************************************* 2025-09-18 10:38:00.740650 | orchestrator | Thursday 18 September 2025 10:36:32 +0000 (0:00:00.070) 0:00:59.559 **** 2025-09-18 10:38:00.740658 | orchestrator | 2025-09-18 10:38:00.740666 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-09-18 10:38:00.740674 | orchestrator | Thursday 18 September 2025 10:36:32 +0000 (0:00:00.084) 0:00:59.643 **** 2025-09-18 10:38:00.740682 | orchestrator | 2025-09-18 10:38:00.740689 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-09-18 10:38:00.740704 | orchestrator | Thursday 18 September 2025 10:36:32 +0000 (0:00:00.080) 0:00:59.724 **** 2025-09-18 10:38:00.740712 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.740720 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.740728 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.740736 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.740743 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.740751 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.740759 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.740767 | orchestrator | 2025-09-18 10:38:00.740783 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-09-18 10:38:00.740791 | orchestrator | Thursday 18 September 2025 10:37:09 +0000 (0:00:37.376) 0:01:37.101 **** 2025-09-18 10:38:00.740799 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.740807 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.740869 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.740880 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.740888 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.740943 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.740953 | orchestrator | 
changed: [testbed-manager] 2025-09-18 10:38:00.740961 | orchestrator | 2025-09-18 10:38:00.740969 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-09-18 10:38:00.740977 | orchestrator | Thursday 18 September 2025 10:37:45 +0000 (0:00:36.253) 0:02:13.355 **** 2025-09-18 10:38:00.740986 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:38:00.740994 | orchestrator | ok: [testbed-manager] 2025-09-18 10:38:00.741002 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:38:00.741010 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:38:00.741018 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:38:00.741026 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:38:00.741042 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:38:00.741050 | orchestrator | 2025-09-18 10:38:00.741058 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-09-18 10:38:00.741066 | orchestrator | Thursday 18 September 2025 10:37:48 +0000 (0:00:02.239) 0:02:15.594 **** 2025-09-18 10:38:00.741074 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:00.741082 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:38:00.741090 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:00.741098 | orchestrator | changed: [testbed-manager] 2025-09-18 10:38:00.741105 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:00.741113 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:38:00.741121 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:38:00.741129 | orchestrator | 2025-09-18 10:38:00.741137 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:38:00.741145 | orchestrator | testbed-manager : ok=22  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741154 | orchestrator | testbed-node-0 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 
10:38:00.741162 | orchestrator | testbed-node-1 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741171 | orchestrator | testbed-node-2 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741185 | orchestrator | testbed-node-3 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741200 | orchestrator | testbed-node-4 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741214 | orchestrator | testbed-node-5 : ok=18  changed=14  unreachable=0 failed=0 skipped=4  rescued=0 ignored=0 2025-09-18 10:38:00.741229 | orchestrator | 2025-09-18 10:38:00.741243 | orchestrator | 2025-09-18 10:38:00.741258 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:38:00.741273 | orchestrator | Thursday 18 September 2025 10:37:57 +0000 (0:00:09.510) 0:02:25.104 **** 2025-09-18 10:38:00.741288 | orchestrator | =============================================================================== 2025-09-18 10:38:00.741303 | orchestrator | common : Restart fluentd container ------------------------------------- 37.38s 2025-09-18 10:38:00.741318 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 36.25s 2025-09-18 10:38:00.741332 | orchestrator | common : Restart cron container ----------------------------------------- 9.51s 2025-09-18 10:38:00.741346 | orchestrator | common : Copying over config.json files for services -------------------- 6.03s 2025-09-18 10:38:00.741360 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.77s 2025-09-18 10:38:00.741375 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 4.43s 2025-09-18 10:38:00.741389 | orchestrator | common : Check common containers ---------------------------------------- 4.18s 2025-09-18 
10:38:00.741401 | orchestrator | common : Copying over fluentd.conf -------------------------------------- 4.06s 2025-09-18 10:38:00.741413 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.87s 2025-09-18 10:38:00.741426 | orchestrator | common : Copying over cron logrotate config file ------------------------ 3.51s 2025-09-18 10:38:00.741438 | orchestrator | common : Ensuring config directories have correct owner and permission --- 3.39s 2025-09-18 10:38:00.741450 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 3.28s 2025-09-18 10:38:00.741462 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.13s 2025-09-18 10:38:00.741483 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 3.13s 2025-09-18 10:38:00.741503 | orchestrator | common : Creating log volume -------------------------------------------- 2.52s 2025-09-18 10:38:00.741514 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.24s 2025-09-18 10:38:00.741525 | orchestrator | service-cert-copy : common | Copying over backend internal TLS certificate --- 1.93s 2025-09-18 10:38:00.741543 | orchestrator | common : Restart systemd-tmpfiles --------------------------------------- 1.58s 2025-09-18 10:38:00.741556 | orchestrator | common : Copying over /run subdirectories conf -------------------------- 1.58s 2025-09-18 10:38:00.741569 | orchestrator | common : include_tasks -------------------------------------------------- 1.36s 2025-09-18 10:38:00.741582 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:00.741595 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:00.741608 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is 
in state STARTED 2025-09-18 10:38:00.741620 | orchestrator | 2025-09-18 10:38:00 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:00.741634 | orchestrator | 2025-09-18 10:38:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:03.776713 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:03.777023 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state STARTED 2025-09-18 10:38:03.777642 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:03.778231 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:03.780705 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:03.782110 | orchestrator | 2025-09-18 10:38:03 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:03.782134 | orchestrator | 2025-09-18 10:38:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:06.816555 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:06.816650 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state STARTED 2025-09-18 10:38:06.817095 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:06.817676 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:06.818372 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:06.820174 | orchestrator | 2025-09-18 10:38:06 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in 
state STARTED 2025-09-18 10:38:06.820190 | orchestrator | 2025-09-18 10:38:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:09.854138 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:09.854329 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state STARTED 2025-09-18 10:38:09.854889 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:09.855489 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:09.856201 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:09.856965 | orchestrator | 2025-09-18 10:38:09 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:09.856991 | orchestrator | 2025-09-18 10:38:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:12.882071 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:12.882173 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state STARTED 2025-09-18 10:38:12.882926 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:12.883845 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:12.885263 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:12.886173 | orchestrator | 2025-09-18 10:38:12 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:12.886198 | orchestrator | 2025-09-18 10:38:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 
10:38:15.920618 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:15.920857 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task db7dcb5a-05bd-4cb5-84b1-76ce45c57c2b is in state SUCCESS 2025-09-18 10:38:15.921825 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:15.922765 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:15.923677 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:15.924765 | orchestrator | 2025-09-18 10:38:15 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:15.924786 | orchestrator | 2025-09-18 10:38:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:18.958567 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:18.958683 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:18.960990 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:18.961018 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:18.961029 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:18.961954 | orchestrator | 2025-09-18 10:38:18 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:18.961977 | orchestrator | 2025-09-18 10:38:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:22.003089 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 
10:38:22.003144 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:22.004392 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:22.005021 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:22.005733 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:22.008677 | orchestrator | 2025-09-18 10:38:22 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:22.008690 | orchestrator | 2025-09-18 10:38:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:25.123012 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:25.123414 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:25.124673 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:25.126085 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:25.127960 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:25.128849 | orchestrator | 2025-09-18 10:38:25 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:25.130036 | orchestrator | 2025-09-18 10:38:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:28.177251 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:28.177347 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 
10:38:28.177362 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:28.181220 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:28.181258 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:28.181273 | orchestrator | 2025-09-18 10:38:28 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:28.181286 | orchestrator | 2025-09-18 10:38:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:31.796523 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:31.797174 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:31.797414 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:31.798399 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:31.800852 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:31.800876 | orchestrator | 2025-09-18 10:38:31 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:31.800889 | orchestrator | 2025-09-18 10:38:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:34.832662 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:34.835319 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:34.835809 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 
10:38:34.836538 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:34.837082 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state STARTED 2025-09-18 10:38:34.837888 | orchestrator | 2025-09-18 10:38:34 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED 2025-09-18 10:38:34.837912 | orchestrator | 2025-09-18 10:38:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:38:37.930457 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:38:37.930904 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:38:37.931806 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:38:37.932603 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:38:37.933647 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task 18c61731-8d4a-4c25-b99a-289505680d71 is in state SUCCESS 2025-09-18 10:38:37.935491 | orchestrator | 2025-09-18 10:38:37.935530 | orchestrator | 2025-09-18 10:38:37.935544 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:38:37.935556 | orchestrator | 2025-09-18 10:38:37.935568 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:38:37.935580 | orchestrator | Thursday 18 September 2025 10:38:05 +0000 (0:00:00.327) 0:00:00.327 **** 2025-09-18 10:38:37.935591 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:38:37.935603 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:38:37.935615 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:38:37.935626 | orchestrator | 2025-09-18 10:38:37.935637 | orchestrator | TASK [Group hosts based on 
enabled services] *********************************** 2025-09-18 10:38:37.935649 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.402) 0:00:00.729 **** 2025-09-18 10:38:37.935660 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True) 2025-09-18 10:38:37.935672 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True) 2025-09-18 10:38:37.935683 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True) 2025-09-18 10:38:37.935694 | orchestrator | 2025-09-18 10:38:37.935706 | orchestrator | PLAY [Apply role memcached] **************************************************** 2025-09-18 10:38:37.935718 | orchestrator | 2025-09-18 10:38:37.935729 | orchestrator | TASK [memcached : include_tasks] *********************************************** 2025-09-18 10:38:37.935740 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.662) 0:00:01.391 **** 2025-09-18 10:38:37.935751 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:38:37.935781 | orchestrator | 2025-09-18 10:38:37.935794 | orchestrator | TASK [memcached : Ensuring config directories exist] *************************** 2025-09-18 10:38:37.935805 | orchestrator | Thursday 18 September 2025 10:38:07 +0000 (0:00:00.636) 0:00:02.028 **** 2025-09-18 10:38:37.935817 | orchestrator | changed: [testbed-node-0] => (item=memcached) 2025-09-18 10:38:37.935828 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-18 10:38:37.935840 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-18 10:38:37.935851 | orchestrator | 2025-09-18 10:38:37.935862 | orchestrator | TASK [memcached : Copying over config.json files for services] ***************** 2025-09-18 10:38:37.935874 | orchestrator | Thursday 18 September 2025 10:38:08 +0000 (0:00:00.921) 0:00:02.949 **** 2025-09-18 10:38:37.935885 | orchestrator | changed: [testbed-node-0] => (item=memcached) 
2025-09-18 10:38:37.935897 | orchestrator | changed: [testbed-node-1] => (item=memcached) 2025-09-18 10:38:37.935908 | orchestrator | changed: [testbed-node-2] => (item=memcached) 2025-09-18 10:38:37.935920 | orchestrator | 2025-09-18 10:38:37.935931 | orchestrator | TASK [memcached : Check memcached container] *********************************** 2025-09-18 10:38:37.935942 | orchestrator | Thursday 18 September 2025 10:38:10 +0000 (0:00:02.182) 0:00:05.131 **** 2025-09-18 10:38:37.935972 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:37.935984 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:37.935996 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:37.936007 | orchestrator | 2025-09-18 10:38:37.936019 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] ********************** 2025-09-18 10:38:37.936030 | orchestrator | Thursday 18 September 2025 10:38:12 +0000 (0:00:01.977) 0:00:07.109 **** 2025-09-18 10:38:37.936042 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:38:37.936053 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:38:37.936064 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:38:37.936076 | orchestrator | 2025-09-18 10:38:37.936088 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:38:37.936101 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:38:37.936115 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:38:37.936127 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:38:37.936139 | orchestrator | 2025-09-18 10:38:37.936151 | orchestrator | 2025-09-18 10:38:37.936163 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:38:37.936176 | orchestrator | Thursday 
18 September 2025 10:38:14 +0000 (0:00:02.242) 0:00:09.352 ****
2025-09-18 10:38:37.936188 | orchestrator | ===============================================================================
2025-09-18 10:38:37.936200 | orchestrator | memcached : Restart memcached container --------------------------------- 2.24s
2025-09-18 10:38:37.936212 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.18s
2025-09-18 10:38:37.936225 | orchestrator | memcached : Check memcached container ----------------------------------- 1.98s
2025-09-18 10:38:37.936237 | orchestrator | memcached : Ensuring config directories exist --------------------------- 0.92s
2025-09-18 10:38:37.936248 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.66s
2025-09-18 10:38:37.936261 | orchestrator | memcached : include_tasks ----------------------------------------------- 0.64s
2025-09-18 10:38:37.936273 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s
2025-09-18 10:38:37.936285 | orchestrator |
2025-09-18 10:38:37.936297 | orchestrator |
2025-09-18 10:38:37.936310 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:38:37.936322 | orchestrator |
2025-09-18 10:38:37.936334 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:38:37.936346 | orchestrator | Thursday 18 September 2025 10:38:05 +0000 (0:00:00.485) 0:00:00.485 ****
2025-09-18 10:38:37.936358 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:38:37.936370 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:38:37.936383 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:38:37.936395 | orchestrator |
2025-09-18 10:38:37.936408 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:38:37.936433 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.573) 0:00:01.058 ****
2025-09-18 10:38:37.936445 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-09-18 10:38:37.936457 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-09-18 10:38:37.936468 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-09-18 10:38:37.936479 | orchestrator |
2025-09-18 10:38:37.936491 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-09-18 10:38:37.936502 | orchestrator |
2025-09-18 10:38:37.936513 | orchestrator | TASK [redis : include_tasks] ***************************************************
2025-09-18 10:38:37.936525 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.627) 0:00:01.685 ****
2025-09-18 10:38:37.936536 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:38:37.936555 | orchestrator |
2025-09-18 10:38:37.936566 | orchestrator | TASK [redis : Ensuring config directories exist] *******************************
2025-09-18 10:38:37.936578 | orchestrator | Thursday 18 September 2025 10:38:08 +0000 (0:00:01.092) 0:00:02.777 ****
2025-09-18 10:38:37.936591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image':
'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936645 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936658 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936696 | orchestrator |
2025-09-18 10:38:37.936708 | orchestrator | TASK [redis : Copying over default config.json files] **************************
2025-09-18 10:38:37.936720 | orchestrator | Thursday 18 September 2025 10:38:09 +0000 (0:00:01.628) 0:00:04.406 ****
2025-09-18 10:38:37.936732 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936823 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936838 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936850 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936868 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936887 | orchestrator |
2025-09-18 10:38:37.936898 | orchestrator | TASK [redis : Copying over redis config files] *********************************
2025-09-18 10:38:37.936909 | orchestrator | Thursday 18 September 2025 10:38:12 +0000 (0:00:02.774) 0:00:07.181 ****
2025-09-18 10:38:37.936920 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936932 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.936984 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937002 | orchestrator |
2025-09-18 10:38:37.937018 | orchestrator | TASK [redis : Check redis containers] ******************************************
2025-09-18 10:38:37.937029 | orchestrator | Thursday 18 September 2025 10:38:15 +0000 (0:00:02.899) 0:00:10.080 ****
2025-09-18 10:38:37.937041 | orchestrator | changed:
[testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937052 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937064 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/redis:2024.2', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937080 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937104 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/redis-sentinel:2024.2', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}})
2025-09-18 10:38:37.937121 | orchestrator |
2025-09-18 10:38:37.937132 | orchestrator | TASK [redis : Flush handlers]
**************************************************
2025-09-18 10:38:37.937143 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:02.295) 0:00:12.375 ****
2025-09-18 10:38:37.937154 | orchestrator |
2025-09-18 10:38:37.937165 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-18 10:38:37.937181 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:00.140) 0:00:12.516 ****
2025-09-18 10:38:37.937192 | orchestrator |
2025-09-18 10:38:37.937203 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-09-18 10:38:37.937214 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:00.127) 0:00:12.643 ****
2025-09-18 10:38:37.937225 | orchestrator |
2025-09-18 10:38:37.937236 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-09-18 10:38:37.937247 | orchestrator | Thursday 18 September 2025 10:38:18 +0000 (0:00:00.159) 0:00:12.803 ****
2025-09-18 10:38:37.937257 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:38:37.937269 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:38:37.937280 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:38:37.937290 | orchestrator |
2025-09-18 10:38:37.937301 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-09-18 10:38:37.937312 | orchestrator | Thursday 18 September 2025 10:38:27 +0000 (0:00:09.046) 0:00:21.850 ****
2025-09-18 10:38:37.937323 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:38:37.937334 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:38:37.937344 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:38:37.937355 | orchestrator |
2025-09-18 10:38:37.937366 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:38:37.937377 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:38:37.937389 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:38:37.937399 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:38:37.937410 | orchestrator |
2025-09-18 10:38:37.937421 | orchestrator |
2025-09-18 10:38:37.937432 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:38:37.937443 | orchestrator | Thursday 18 September 2025 10:38:35 +0000 (0:00:07.892) 0:00:29.742 ****
2025-09-18 10:38:37.937453 | orchestrator | ===============================================================================
2025-09-18 10:38:37.937464 | orchestrator | redis : Restart redis container ----------------------------------------- 9.05s
2025-09-18 10:38:37.937475 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 7.89s
2025-09-18 10:38:37.937486 | orchestrator | redis : Copying over redis config files --------------------------------- 2.90s
2025-09-18 10:38:37.937497 | orchestrator | redis : Copying over default config.json files -------------------------- 2.77s
2025-09-18 10:38:37.937507 | orchestrator | redis : Check redis containers ------------------------------------------ 2.30s
2025-09-18 10:38:37.937523 | orchestrator | redis : Ensuring config directories exist ------------------------------- 1.63s
2025-09-18 10:38:37.937534 | orchestrator | redis : include_tasks --------------------------------------------------- 1.09s
2025-09-18 10:38:37.937545 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.63s
2025-09-18 10:38:37.937556 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.57s
2025-09-18 10:38:37.937566 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.43s
2025-09-18 10:38:37.937577 | orchestrator | 2025-09-18 10:38:37 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:37.937596 | orchestrator | 2025-09-18 10:38:37 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:41.073723 | orchestrator | 2025-09-18 10:38:41 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:41.082588 | orchestrator | 2025-09-18 10:38:41 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:41.089882 | orchestrator | 2025-09-18 10:38:41 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:41.090103 | orchestrator | 2025-09-18 10:38:41 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:41.090902 | orchestrator | 2025-09-18 10:38:41 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:41.091189 | orchestrator | 2025-09-18 10:38:41 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:44.129847 | orchestrator | 2025-09-18 10:38:44 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:44.130793 | orchestrator | 2025-09-18 10:38:44 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:44.130927 | orchestrator | 2025-09-18 10:38:44 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:44.132164 | orchestrator | 2025-09-18 10:38:44 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:44.132839 | orchestrator | 2025-09-18 10:38:44 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:44.133017 | orchestrator | 2025-09-18 10:38:44 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:47.249193 | orchestrator | 2025-09-18 10:38:47 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:47.249274 | orchestrator | 2025-09-18 10:38:47 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:47.249696 | orchestrator | 2025-09-18 10:38:47 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:47.250123 | orchestrator | 2025-09-18 10:38:47 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:47.250662 | orchestrator | 2025-09-18 10:38:47 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:47.250683 | orchestrator | 2025-09-18 10:38:47 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:50.289152 | orchestrator | 2025-09-18 10:38:50 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:50.290892 | orchestrator | 2025-09-18 10:38:50 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:50.291882 | orchestrator | 2025-09-18 10:38:50 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:50.293020 | orchestrator | 2025-09-18 10:38:50 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:50.293982 | orchestrator | 2025-09-18 10:38:50 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:50.295001 | orchestrator | 2025-09-18 10:38:50 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:53.391181 | orchestrator | 2025-09-18 10:38:53 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:53.391268 | orchestrator | 2025-09-18 10:38:53 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:53.391284 | orchestrator | 2025-09-18 10:38:53 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:53.391321 | orchestrator | 2025-09-18 10:38:53 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:53.391333 |
orchestrator | 2025-09-18 10:38:53 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:53.391345 | orchestrator | 2025-09-18 10:38:53 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:56.514535 | orchestrator | 2025-09-18 10:38:56 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:56.514620 | orchestrator | 2025-09-18 10:38:56 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:56.514636 | orchestrator | 2025-09-18 10:38:56 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:56.514648 | orchestrator | 2025-09-18 10:38:56 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:56.514659 | orchestrator | 2025-09-18 10:38:56 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:56.514671 | orchestrator | 2025-09-18 10:38:56 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:38:59.510460 | orchestrator | 2025-09-18 10:38:59 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:38:59.510671 | orchestrator | 2025-09-18 10:38:59 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:38:59.511522 | orchestrator | 2025-09-18 10:38:59 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:38:59.513174 | orchestrator | 2025-09-18 10:38:59 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:38:59.513974 | orchestrator | 2025-09-18 10:38:59 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:38:59.514166 | orchestrator | 2025-09-18 10:38:59 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:39:02.617096 | orchestrator | 2025-09-18 10:39:02 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:39:02.617197 | orchestrator | 2025-09-18 10:39:02 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:39:02.617214 | orchestrator | 2025-09-18 10:39:02 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:39:02.619664 | orchestrator | 2025-09-18 10:39:02 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:39:02.619690 | orchestrator | 2025-09-18 10:39:02 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:39:02.619701 | orchestrator | 2025-09-18 10:39:02 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:39:05.641980 | orchestrator | 2025-09-18 10:39:05 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:39:05.645561 | orchestrator | 2025-09-18 10:39:05 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:39:05.645625 | orchestrator | 2025-09-18 10:39:05 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:39:05.645639 | orchestrator | 2025-09-18 10:39:05 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:39:05.645940 | orchestrator | 2025-09-18 10:39:05 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:39:05.645966 | orchestrator | 2025-09-18 10:39:05 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:39:08.686817 | orchestrator | 2025-09-18 10:39:08 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:39:08.687028 | orchestrator | 2025-09-18 10:39:08 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:39:08.687694 | orchestrator | 2025-09-18 10:39:08 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:39:08.688330 | orchestrator | 2025-09-18 10:39:08 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:39:08.689101 | orchestrator | 2025-09-18
10:39:08 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state STARTED
2025-09-18 10:39:08.689124 | orchestrator | 2025-09-18 10:39:08 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:39:11.939194 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:39:11.939806 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED
2025-09-18 10:39:11.941035 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED
2025-09-18 10:39:11.941828 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:39:11.942879 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED
2025-09-18 10:39:11.944807 | orchestrator | 2025-09-18 10:39:11 | INFO  | Task 0008db4c-bc75-4487-bbd4-d7b89199e4f3 is in state SUCCESS
2025-09-18 10:39:11.944879 | orchestrator |
2025-09-18 10:39:11.947190 | orchestrator |
2025-09-18 10:39:11.947230 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:39:11.947243 | orchestrator |
2025-09-18 10:39:11.947254 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:39:11.947266 | orchestrator | Thursday 18 September 2025 10:38:05 +0000 (0:00:00.259) 0:00:00.259 ****
2025-09-18 10:39:11.947278 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:11.947290 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:11.947301 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:11.947313 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:39:11.947325 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:39:11.947337 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:39:11.947348 | orchestrator |
2025-09-18 10:39:11.947360 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:39:11.947371 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:01.035) 0:00:01.295 ****
2025-09-18 10:39:11.947382 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947394 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947405 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947416 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947427 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947438 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False)
2025-09-18 10:39:11.947449 | orchestrator |
2025-09-18 10:39:11.947460 | orchestrator | PLAY [Apply role openvswitch] **************************************************
2025-09-18 10:39:11.947472 | orchestrator |
2025-09-18 10:39:11.947483 | orchestrator | TASK [openvswitch : include_tasks] *********************************************
2025-09-18 10:39:11.947494 | orchestrator | Thursday 18 September 2025 10:38:07 +0000 (0:00:00.883) 0:00:02.178 ****
2025-09-18 10:39:11.947506 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:39:11.947519 | orchestrator |
2025-09-18 10:39:11.947530 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-09-18 10:39:11.947541 | orchestrator | Thursday 18 September 2025 10:38:08 +0000 (0:00:01.550) 0:00:03.729 ****
2025-09-18 10:39:11.947571 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-18 10:39:11.947583 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-18 10:39:11.947594 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-18 10:39:11.947605 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-18 10:39:11.947616 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-18 10:39:11.947627 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-18 10:39:11.947638 | orchestrator |
2025-09-18 10:39:11.947649 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-09-18 10:39:11.947660 | orchestrator | Thursday 18 September 2025 10:38:10 +0000 (0:00:01.293) 0:00:05.023 ****
2025-09-18 10:39:11.947671 | orchestrator | changed: [testbed-node-0] => (item=openvswitch)
2025-09-18 10:39:11.947682 | orchestrator | changed: [testbed-node-4] => (item=openvswitch)
2025-09-18 10:39:11.947693 | orchestrator | changed: [testbed-node-3] => (item=openvswitch)
2025-09-18 10:39:11.947731 | orchestrator | changed: [testbed-node-5] => (item=openvswitch)
2025-09-18 10:39:11.947743 | orchestrator | changed: [testbed-node-2] => (item=openvswitch)
2025-09-18 10:39:11.947753 | orchestrator | changed: [testbed-node-1] => (item=openvswitch)
2025-09-18 10:39:11.947764 | orchestrator |
2025-09-18 10:39:11.947775 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-09-18 10:39:11.947786 | orchestrator | Thursday 18 September 2025 10:38:11 +0000 (0:00:01.684) 0:00:06.707 ****
2025-09-18 10:39:11.947797 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)
2025-09-18 10:39:11.947808 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:11.947819 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)
2025-09-18 10:39:11.947830 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:11.947841 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)
2025-09-18 10:39:11.947851 | orchestrator |
skipping: [testbed-node-2] 2025-09-18 10:39:11.947862 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-09-18 10:39:11.947873 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:11.947884 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-09-18 10:39:11.947894 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:11.947905 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-09-18 10:39:11.947916 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:11.947926 | orchestrator | 2025-09-18 10:39:11.947937 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-09-18 10:39:11.947948 | orchestrator | Thursday 18 September 2025 10:38:13 +0000 (0:00:01.307) 0:00:08.014 **** 2025-09-18 10:39:11.947959 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:11.947970 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:11.947981 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:11.947991 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:11.948002 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:11.948013 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:11.948023 | orchestrator | 2025-09-18 10:39:11.948034 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-09-18 10:39:11.948045 | orchestrator | Thursday 18 September 2025 10:38:13 +0000 (0:00:00.677) 0:00:08.691 **** 2025-09-18 10:39:11.948081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948107 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948119 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948131 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': 
True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948143 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 
2025-09-18 10:39:11.948178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948208 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948220 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948231 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948261 | orchestrator | 2025-09-18 10:39:11.948280 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-09-18 10:39:11.948291 | orchestrator | Thursday 18 September 2025 10:38:15 +0000 (0:00:01.648) 0:00:10.340 **** 2025-09-18 10:39:11.948303 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948315 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948326 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948349 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948377 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948395 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948407 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948419 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948430 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948442 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948471 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948483 | orchestrator | 2025-09-18 10:39:11.948494 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-09-18 10:39:11.948506 | orchestrator | Thursday 18 September 2025 10:38:18 +0000 (0:00:03.373) 0:00:13.714 **** 2025-09-18 10:39:11.948517 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:11.948528 | orchestrator | skipping: [testbed-node-0] 2025-09-18 
10:39:11.948539 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:11.948550 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:11.948561 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:11.948572 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:11.948583 | orchestrator | 2025-09-18 10:39:11.948594 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-09-18 10:39:11.948605 | orchestrator | Thursday 18 September 2025 10:38:20 +0000 (0:00:01.683) 0:00:15.397 **** 2025-09-18 10:39:11.948616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948628 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948640 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948651 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948679 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948691 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/openvswitch-db-server:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948702 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 
'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948760 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948778 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': 
'30'}}}) 2025-09-18 10:39:11.948806 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948819 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/openvswitch-vswitchd:2024.2', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-09-18 10:39:11.948830 | orchestrator | 2025-09-18 10:39:11.948842 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.948853 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:02.285) 0:00:17.683 **** 2025-09-18 10:39:11.948864 | orchestrator | 2025-09-18 10:39:11.948875 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.948886 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 
(0:00:00.269) 0:00:17.953 **** 2025-09-18 10:39:11.948897 | orchestrator | 2025-09-18 10:39:11.948908 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.948919 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.181) 0:00:18.135 **** 2025-09-18 10:39:11.948930 | orchestrator | 2025-09-18 10:39:11.948941 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.948952 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.176) 0:00:18.312 **** 2025-09-18 10:39:11.948963 | orchestrator | 2025-09-18 10:39:11.948974 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.948985 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.130) 0:00:18.443 **** 2025-09-18 10:39:11.948996 | orchestrator | 2025-09-18 10:39:11.949007 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-09-18 10:39:11.949017 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.128) 0:00:18.571 **** 2025-09-18 10:39:11.949028 | orchestrator | 2025-09-18 10:39:11.949039 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-09-18 10:39:11.949050 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.152) 0:00:18.723 **** 2025-09-18 10:39:11.949061 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:11.949078 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:11.949089 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:11.949100 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:11.949111 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:11.949122 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:11.949133 | orchestrator | 2025-09-18 10:39:11.949144 | orchestrator | RUNNING HANDLER [openvswitch : 
Waiting for openvswitch_db service to be ready] *** 2025-09-18 10:39:11.949155 | orchestrator | Thursday 18 September 2025 10:38:35 +0000 (0:00:11.121) 0:00:29.844 **** 2025-09-18 10:39:11.949166 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:39:11.949177 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:11.949188 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:11.949199 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:39:11.949210 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:11.949220 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:39:11.949231 | orchestrator | 2025-09-18 10:39:11.949242 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-18 10:39:11.949253 | orchestrator | Thursday 18 September 2025 10:38:36 +0000 (0:00:01.287) 0:00:31.132 **** 2025-09-18 10:39:11.949264 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:11.949275 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:11.949286 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:11.949297 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:11.949308 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:11.949319 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:11.949329 | orchestrator | 2025-09-18 10:39:11.949340 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-09-18 10:39:11.949351 | orchestrator | Thursday 18 September 2025 10:38:46 +0000 (0:00:09.903) 0:00:41.035 **** 2025-09-18 10:39:11.949363 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-09-18 10:39:11.949374 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-09-18 10:39:11.949385 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 
'testbed-node-1'}) 2025-09-18 10:39:11.949396 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-09-18 10:39:11.949411 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-09-18 10:39:11.949428 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-09-18 10:39:11.949439 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-09-18 10:39:11.949450 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-09-18 10:39:11.949461 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-09-18 10:39:11.949472 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-09-18 10:39:11.949482 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-09-18 10:39:11.949493 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-09-18 10:39:11.949504 | orchestrator | ok: [testbed-node-3] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 10:39:11.949514 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 10:39:11.949525 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 10:39:11.949536 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 
'absent'}) 2025-09-18 10:39:11.949552 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 10:39:11.949563 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-09-18 10:39:11.949574 | orchestrator | 2025-09-18 10:39:11.949584 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-09-18 10:39:11.949595 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:07.148) 0:00:48.184 **** 2025-09-18 10:39:11.949606 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-09-18 10:39:11.949617 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:11.949628 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-09-18 10:39:11.949638 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:11.949649 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-09-18 10:39:11.949660 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:11.949671 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-09-18 10:39:11.949681 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-09-18 10:39:11.949692 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-09-18 10:39:11.949703 | orchestrator | 2025-09-18 10:39:11.949806 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-09-18 10:39:11.949817 | orchestrator | Thursday 18 September 2025 10:38:56 +0000 (0:00:02.948) 0:00:51.133 **** 2025-09-18 10:39:11.949828 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-09-18 10:39:11.949839 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:11.949850 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-09-18 10:39:11.949861 | orchestrator | skipping: [testbed-node-4] 2025-09-18 
10:39:11.949871 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-09-18 10:39:11.949882 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:11.949891 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-09-18 10:39:11.949901 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-09-18 10:39:11.949911 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-09-18 10:39:11.949920 | orchestrator | 2025-09-18 10:39:11.949930 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-09-18 10:39:11.949939 | orchestrator | Thursday 18 September 2025 10:39:00 +0000 (0:00:03.813) 0:00:54.946 **** 2025-09-18 10:39:11.949949 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:11.949959 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:11.949968 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:11.949978 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:11.949988 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:11.949997 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:11.950007 | orchestrator | 2025-09-18 10:39:11.950083 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:39:11.950095 | orchestrator | testbed-node-0 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:39:11.950105 | orchestrator | testbed-node-1 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:39:11.950115 | orchestrator | testbed-node-2 : ok=15  changed=11  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:39:11.950125 | orchestrator | testbed-node-3 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:39:11.950140 | orchestrator | testbed-node-4 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-09-18 10:39:11.950165 | orchestrator | testbed-node-5 : ok=13  changed=9  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:39:11.950175 | orchestrator | 2025-09-18 10:39:11.950185 | orchestrator | 2025-09-18 10:39:11.950195 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:39:11.950205 | orchestrator | Thursday 18 September 2025 10:39:09 +0000 (0:00:08.981) 0:01:03.928 **** 2025-09-18 10:39:11.950214 | orchestrator | =============================================================================== 2025-09-18 10:39:11.950224 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.89s 2025-09-18 10:39:11.950233 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 11.12s 2025-09-18 10:39:11.950243 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.15s 2025-09-18 10:39:11.950253 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.81s 2025-09-18 10:39:11.950262 | orchestrator | openvswitch : Copying over config.json files for services --------------- 3.37s 2025-09-18 10:39:11.950272 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.95s 2025-09-18 10:39:11.950281 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.29s 2025-09-18 10:39:11.950291 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 1.68s 2025-09-18 10:39:11.950300 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.68s 2025-09-18 10:39:11.950309 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.65s 2025-09-18 10:39:11.950319 | orchestrator | openvswitch : include_tasks --------------------------------------------- 1.55s 2025-09-18 10:39:11.950328 | 
orchestrator | module-load : Drop module persistence ----------------------------------- 1.31s 2025-09-18 10:39:11.950338 | orchestrator | module-load : Load modules ---------------------------------------------- 1.29s 2025-09-18 10:39:11.950347 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 1.29s 2025-09-18 10:39:11.950357 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.04s 2025-09-18 10:39:11.950366 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.04s 2025-09-18 10:39:11.950376 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.88s 2025-09-18 10:39:11.950385 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.68s 2025-09-18 10:39:11.950395 | orchestrator | 2025-09-18 10:39:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:15.119010 | orchestrator | 2025-09-18 10:39:15 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:15.119372 | orchestrator | 2025-09-18 10:39:15 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:15.120419 | orchestrator | 2025-09-18 10:39:15 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:39:15.122395 | orchestrator | 2025-09-18 10:39:15 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:15.123259 | orchestrator | 2025-09-18 10:39:15 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:15.123293 | orchestrator | 2025-09-18 10:39:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:18.239928 | orchestrator | 2025-09-18 10:39:18 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:18.240032 | orchestrator | 2025-09-18 10:39:18 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is 
in state STARTED 2025-09-18 10:39:18.241125 | orchestrator | 2025-09-18 10:39:18 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:39:18.241846 | orchestrator | 2025-09-18 10:39:18 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:18.242518 | orchestrator | 2025-09-18 10:39:18 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:18.242625 | orchestrator | 2025-09-18 10:39:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:21.301927 | orchestrator | 2025-09-18 10:39:21 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:21.302003 | orchestrator | 2025-09-18 10:39:21 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:21.302477 | orchestrator | 2025-09-18 10:39:21 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state STARTED 2025-09-18 10:39:21.303161 | orchestrator | 2025-09-18 10:39:21 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:21.303936 | orchestrator | 2025-09-18 10:39:21 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:21.304029 | orchestrator | 2025-09-18 10:39:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:24.341428 | orchestrator | 2025-09-18 10:39:24.341516 | orchestrator | 2025-09-18 10:39:24.341530 | orchestrator | PLAY [Prepare all k3s nodes] *************************************************** 2025-09-18 10:39:24.341541 | orchestrator | 2025-09-18 10:39:24.341551 | orchestrator | TASK [k3s_prereq : Validating arguments against arg spec 'main' - Prerequisites] *** 2025-09-18 10:39:24.341562 | orchestrator | Thursday 18 September 2025 10:35:32 +0000 (0:00:00.123) 0:00:00.123 **** 2025-09-18 10:39:24.341572 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:39:24.341583 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:39:24.341593 | 
orchestrator | ok: [testbed-node-5] 2025-09-18 10:39:24.341602 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.341612 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.341622 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.341632 | orchestrator | 2025-09-18 10:39:24.341642 | orchestrator | TASK [k3s_prereq : Set same timezone on every Server] ************************** 2025-09-18 10:39:24.341652 | orchestrator | Thursday 18 September 2025 10:35:33 +0000 (0:00:00.657) 0:00:00.780 **** 2025-09-18 10:39:24.341661 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.341672 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.341681 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.341718 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.341727 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.341737 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.341747 | orchestrator | 2025-09-18 10:39:24.341757 | orchestrator | TASK [k3s_prereq : Set SELinux to disabled state] ****************************** 2025-09-18 10:39:24.341766 | orchestrator | Thursday 18 September 2025 10:35:34 +0000 (0:00:00.606) 0:00:01.387 **** 2025-09-18 10:39:24.341776 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.341786 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.341795 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.341805 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.341815 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.341824 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.341834 | orchestrator | 2025-09-18 10:39:24.341843 | orchestrator | TASK [k3s_prereq : Enable IPv4 forwarding] ************************************* 2025-09-18 10:39:24.341853 | orchestrator | Thursday 18 September 2025 10:35:34 +0000 (0:00:00.729) 0:00:02.117 **** 2025-09-18 10:39:24.341862 | orchestrator | changed: 
[testbed-node-5] 2025-09-18 10:39:24.341872 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.341881 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:24.341891 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:24.341901 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.341910 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.341920 | orchestrator | 2025-09-18 10:39:24.341929 | orchestrator | TASK [k3s_prereq : Enable IPv6 forwarding] ************************************* 2025-09-18 10:39:24.341961 | orchestrator | Thursday 18 September 2025 10:35:37 +0000 (0:00:02.714) 0:00:04.831 **** 2025-09-18 10:39:24.341974 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:24.341986 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:24.341997 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:24.342007 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.342103 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.342115 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.342125 | orchestrator | 2025-09-18 10:39:24.342136 | orchestrator | TASK [k3s_prereq : Enable IPv6 router advertisements] ************************** 2025-09-18 10:39:24.342147 | orchestrator | Thursday 18 September 2025 10:35:38 +0000 (0:00:00.903) 0:00:05.734 **** 2025-09-18 10:39:24.342158 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:24.342168 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.342179 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.342190 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:24.342200 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:24.342210 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.342221 | orchestrator | 2025-09-18 10:39:24.342231 | orchestrator | TASK [k3s_prereq : Add br_netfilter to /etc/modules-load.d/] ******************* 2025-09-18 10:39:24.342242 | orchestrator | 
Thursday 18 September 2025 10:35:40 +0000 (0:00:02.216) 0:00:07.951 **** 2025-09-18 10:39:24.342253 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.342263 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.342274 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.342284 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.342295 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.342306 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.342317 | orchestrator | 2025-09-18 10:39:24.342327 | orchestrator | TASK [k3s_prereq : Load br_netfilter] ****************************************** 2025-09-18 10:39:24.342336 | orchestrator | Thursday 18 September 2025 10:35:41 +0000 (0:00:00.651) 0:00:08.603 **** 2025-09-18 10:39:24.342346 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.342356 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.342365 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.342375 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.342384 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.342394 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.342404 | orchestrator | 2025-09-18 10:39:24.342414 | orchestrator | TASK [k3s_prereq : Set bridge-nf-call-iptables (just to be sure)] ************** 2025-09-18 10:39:24.342423 | orchestrator | Thursday 18 September 2025 10:35:42 +0000 (0:00:01.146) 0:00:09.750 **** 2025-09-18 10:39:24.342433 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342443 | orchestrator | skipping: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342452 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.342462 | orchestrator | skipping: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342472 | orchestrator | skipping: [testbed-node-4] => 
(item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342481 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.342491 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342501 | orchestrator | skipping: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342510 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.342520 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342555 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342566 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.342576 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342586 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342603 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.342613 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 10:39:24.342623 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 10:39:24.342632 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.342642 | orchestrator | 2025-09-18 10:39:24.342651 | orchestrator | TASK [k3s_prereq : Add /usr/local/bin to sudo secure_path] ********************* 2025-09-18 10:39:24.342661 | orchestrator | Thursday 18 September 2025 10:35:43 +0000 (0:00:00.756) 0:00:10.506 **** 2025-09-18 10:39:24.342670 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.342680 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.342707 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.342717 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.342726 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
10:39:24.342736 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.342745 | orchestrator | 2025-09-18 10:39:24.342755 | orchestrator | TASK [k3s_download : Validating arguments against arg spec 'main' - Manage the downloading of K3S binaries] *** 2025-09-18 10:39:24.342766 | orchestrator | Thursday 18 September 2025 10:35:44 +0000 (0:00:01.560) 0:00:12.067 **** 2025-09-18 10:39:24.342776 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:39:24.342786 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:39:24.342795 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:39:24.342805 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.342814 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.342824 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.342834 | orchestrator | 2025-09-18 10:39:24.342843 | orchestrator | TASK [k3s_download : Download k3s binary x64] ********************************** 2025-09-18 10:39:24.342853 | orchestrator | Thursday 18 September 2025 10:35:45 +0000 (0:00:01.025) 0:00:13.092 **** 2025-09-18 10:39:24.342863 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.342872 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.342882 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:24.342892 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.342901 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:24.342911 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:24.342920 | orchestrator | 2025-09-18 10:39:24.342930 | orchestrator | TASK [k3s_download : Download k3s binary arm64] ******************************** 2025-09-18 10:39:24.342940 | orchestrator | Thursday 18 September 2025 10:35:50 +0000 (0:00:05.132) 0:00:18.225 **** 2025-09-18 10:39:24.342950 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.342959 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.342969 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.342978 
| orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.342988 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.342998 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.343007 | orchestrator | 2025-09-18 10:39:24.343017 | orchestrator | TASK [k3s_download : Download k3s binary armhf] ******************************** 2025-09-18 10:39:24.343027 | orchestrator | Thursday 18 September 2025 10:35:52 +0000 (0:00:01.682) 0:00:19.907 **** 2025-09-18 10:39:24.343036 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.343046 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.343056 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.343065 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.343075 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.343084 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.343094 | orchestrator | 2025-09-18 10:39:24.343104 | orchestrator | TASK [k3s_custom_registries : Validating arguments against arg spec 'main' - Configure the use of a custom container registry] *** 2025-09-18 10:39:24.343115 | orchestrator | Thursday 18 September 2025 10:35:54 +0000 (0:00:01.714) 0:00:21.622 **** 2025-09-18 10:39:24.343125 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:39:24.343135 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:39:24.343144 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:39:24.343160 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343169 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343179 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343188 | orchestrator | 2025-09-18 10:39:24.343198 | orchestrator | TASK [k3s_custom_registries : Create directory /etc/rancher/k3s] *************** 2025-09-18 10:39:24.343208 | orchestrator | Thursday 18 September 2025 10:35:55 +0000 (0:00:01.414) 0:00:23.036 **** 2025-09-18 10:39:24.343217 | orchestrator | changed: [testbed-node-4] => (item=rancher) 
2025-09-18 10:39:24.343227 | orchestrator | changed: [testbed-node-5] => (item=rancher) 2025-09-18 10:39:24.343237 | orchestrator | changed: [testbed-node-3] => (item=rancher) 2025-09-18 10:39:24.343247 | orchestrator | changed: [testbed-node-5] => (item=rancher/k3s) 2025-09-18 10:39:24.343256 | orchestrator | changed: [testbed-node-4] => (item=rancher/k3s) 2025-09-18 10:39:24.343266 | orchestrator | changed: [testbed-node-3] => (item=rancher/k3s) 2025-09-18 10:39:24.343276 | orchestrator | changed: [testbed-node-0] => (item=rancher) 2025-09-18 10:39:24.343285 | orchestrator | changed: [testbed-node-1] => (item=rancher) 2025-09-18 10:39:24.343295 | orchestrator | changed: [testbed-node-2] => (item=rancher) 2025-09-18 10:39:24.343304 | orchestrator | changed: [testbed-node-0] => (item=rancher/k3s) 2025-09-18 10:39:24.343314 | orchestrator | changed: [testbed-node-1] => (item=rancher/k3s) 2025-09-18 10:39:24.343323 | orchestrator | changed: [testbed-node-2] => (item=rancher/k3s) 2025-09-18 10:39:24.343333 | orchestrator | 2025-09-18 10:39:24.343343 | orchestrator | TASK [k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml] *** 2025-09-18 10:39:24.343353 | orchestrator | Thursday 18 September 2025 10:35:57 +0000 (0:00:02.193) 0:00:25.230 **** 2025-09-18 10:39:24.343363 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:39:24.343372 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:39:24.343382 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:39:24.343391 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.343401 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.343411 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.343420 | orchestrator | 2025-09-18 10:39:24.343440 | orchestrator | PLAY [Deploy k3s master nodes] ************************************************* 2025-09-18 10:39:24.343450 | orchestrator | 2025-09-18 10:39:24.343460 | orchestrator | TASK [k3s_server : Validating 
arguments against arg spec 'main' - Setup k3s servers] *** 2025-09-18 10:39:24.343470 | orchestrator | Thursday 18 September 2025 10:35:59 +0000 (0:00:01.967) 0:00:27.197 **** 2025-09-18 10:39:24.343479 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343489 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343499 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343508 | orchestrator | 2025-09-18 10:39:24.343518 | orchestrator | TASK [k3s_server : Stop k3s-init] ********************************************** 2025-09-18 10:39:24.343528 | orchestrator | Thursday 18 September 2025 10:36:00 +0000 (0:00:01.036) 0:00:28.234 **** 2025-09-18 10:39:24.343537 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343547 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343557 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343566 | orchestrator | 2025-09-18 10:39:24.343576 | orchestrator | TASK [k3s_server : Stop k3s] *************************************************** 2025-09-18 10:39:24.343586 | orchestrator | Thursday 18 September 2025 10:36:01 +0000 (0:00:01.000) 0:00:29.235 **** 2025-09-18 10:39:24.343596 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343605 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343615 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343624 | orchestrator | 2025-09-18 10:39:24.343634 | orchestrator | TASK [k3s_server : Clean previous runs of k3s-init] **************************** 2025-09-18 10:39:24.343644 | orchestrator | Thursday 18 September 2025 10:36:03 +0000 (0:00:01.521) 0:00:30.756 **** 2025-09-18 10:39:24.343653 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343663 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343672 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343682 | orchestrator | 2025-09-18 10:39:24.343762 | orchestrator | TASK [k3s_server : Deploy K3s http_proxy conf] ********************************* 2025-09-18 
10:39:24.343783 | orchestrator | Thursday 18 September 2025 10:36:04 +0000 (0:00:01.360) 0:00:32.117 **** 2025-09-18 10:39:24.343793 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.343803 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.343813 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.343822 | orchestrator | 2025-09-18 10:39:24.343832 | orchestrator | TASK [k3s_server : Create /etc/rancher/k3s directory] ************************** 2025-09-18 10:39:24.343841 | orchestrator | Thursday 18 September 2025 10:36:05 +0000 (0:00:00.387) 0:00:32.504 **** 2025-09-18 10:39:24.343851 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.343861 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.343871 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.343881 | orchestrator | 2025-09-18 10:39:24.343890 | orchestrator | TASK [k3s_server : Create custom resolv.conf for k3s] ************************** 2025-09-18 10:39:24.343900 | orchestrator | Thursday 18 September 2025 10:36:05 +0000 (0:00:00.691) 0:00:33.195 **** 2025-09-18 10:39:24.343910 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:39:24.343919 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.343929 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:39:24.343939 | orchestrator | 2025-09-18 10:39:24.343948 | orchestrator | TASK [k3s_server : Deploy vip manifest] **************************************** 2025-09-18 10:39:24.343958 | orchestrator | Thursday 18 September 2025 10:36:07 +0000 (0:00:01.542) 0:00:34.738 **** 2025-09-18 10:39:24.343968 | orchestrator | included: /ansible/roles/k3s_server/tasks/vip.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:39:24.343978 | orchestrator | 2025-09-18 10:39:24.343987 | orchestrator | TASK [k3s_server : Set _kube_vip_bgp_peers fact] ******************************* 2025-09-18 10:39:24.343997 | orchestrator | Thursday 18 September 2025 10:36:08 +0000 (0:00:00.777) 
0:00:35.515 **** 2025-09-18 10:39:24.344006 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.344016 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.344026 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.344035 | orchestrator | 2025-09-18 10:39:24.344045 | orchestrator | TASK [k3s_server : Create manifests directory on first master] ***************** 2025-09-18 10:39:24.344055 | orchestrator | Thursday 18 September 2025 10:36:10 +0000 (0:00:02.736) 0:00:38.251 **** 2025-09-18 10:39:24.344065 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.344074 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.344084 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.344094 | orchestrator | 2025-09-18 10:39:24.344103 | orchestrator | TASK [k3s_server : Download vip rbac manifest to first master] ***************** 2025-09-18 10:39:24.344113 | orchestrator | Thursday 18 September 2025 10:36:11 +0000 (0:00:00.824) 0:00:39.076 **** 2025-09-18 10:39:24.344123 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.344132 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.344142 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.344152 | orchestrator | 2025-09-18 10:39:24.344161 | orchestrator | TASK [k3s_server : Copy vip manifest to first master] ************************** 2025-09-18 10:39:24.344171 | orchestrator | Thursday 18 September 2025 10:36:13 +0000 (0:00:01.532) 0:00:40.608 **** 2025-09-18 10:39:24.344181 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.344190 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.344200 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:39:24.344210 | orchestrator | 2025-09-18 10:39:24.344220 | orchestrator | TASK [k3s_server : Deploy metallb manifest] ************************************ 2025-09-18 10:39:24.344229 | orchestrator | Thursday 18 September 2025 10:36:15 +0000 (0:00:02.111) 0:00:42.720 **** 
2025-09-18 10:39:24.344239 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.344249 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.344258 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.344268 | orchestrator |
2025-09-18 10:39:24.344278 | orchestrator | TASK [k3s_server : Deploy kube-vip manifest] ***********************************
2025-09-18 10:39:24.344288 | orchestrator | Thursday 18 September 2025 10:36:15 +0000 (0:00:00.341) 0:00:43.062 ****
2025-09-18 10:39:24.344303 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.344313 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.344323 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.344332 | orchestrator |
2025-09-18 10:39:24.344342 | orchestrator | TASK [k3s_server : Init cluster inside the transient k3s-init service] *********
2025-09-18 10:39:24.344352 | orchestrator | Thursday 18 September 2025 10:36:16 +0000 (0:00:00.502) 0:00:43.564 ****
2025-09-18 10:39:24.344362 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.344372 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.344381 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.344395 | orchestrator |
2025-09-18 10:39:24.344412 | orchestrator | TASK [k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails)] ***
2025-09-18 10:39:24.344422 | orchestrator | Thursday 18 September 2025 10:36:19 +0000 (0:00:03.138) 0:00:46.702 ****
2025-09-18 10:39:24.344432 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-18 10:39:24.344443 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-18 10:39:24.344452 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (20 retries left).
2025-09-18 10:39:24.344462 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-18 10:39:24.344472 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-18 10:39:24.344481 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (19 retries left).
2025-09-18 10:39:24.344491 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-18 10:39:24.344501 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-18 10:39:24.344511 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (18 retries left).
2025-09-18 10:39:24.344520 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-18 10:39:24.344530 | orchestrator | FAILED - RETRYING: [testbed-node-2]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-18 10:39:24.344540 | orchestrator | FAILED - RETRYING: [testbed-node-1]: Verify that all nodes actually joined (check k3s-init.service if this fails) (17 retries left).
2025-09-18 10:39:24.344549 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Verify that all nodes actually joined (check k3s-init.service if this fails) (16 retries left).
2025-09-18 10:39:24.344559 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.344569 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.344578 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.344588 | orchestrator |
2025-09-18 10:39:24.344598 | orchestrator | TASK [k3s_server : Save logs of k3s-init.service] ******************************
2025-09-18 10:39:24.344607 | orchestrator | Thursday 18 September 2025 10:37:13 +0000 (0:00:53.997) 0:01:40.700 ****
2025-09-18 10:39:24.344617 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.344627 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.344636 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.344646 | orchestrator |
2025-09-18 10:39:24.344656 | orchestrator | TASK [k3s_server : Kill the temporary service used for initialization] *********
2025-09-18 10:39:24.344665 | orchestrator | Thursday 18 September 2025 10:37:13 +0000 (0:00:00.385) 0:01:41.086 ****
2025-09-18 10:39:24.344681 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.344716 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.344726 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.344736 | orchestrator |
2025-09-18 10:39:24.344746 | orchestrator | TASK [k3s_server : Copy K3s service file] **************************************
2025-09-18 10:39:24.344756 | orchestrator | Thursday 18 September 2025 10:37:14 +0000 (0:00:01.160) 0:01:42.246 ****
2025-09-18 10:39:24.344765 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.344775 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.344785 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.344794 | orchestrator |
2025-09-18 10:39:24.344804 | orchestrator | TASK [k3s_server : Enable and check K3s service] *******************************
2025-09-18 10:39:24.344813 | orchestrator | Thursday 18 September 2025 10:37:16 +0000 (0:00:01.755) 0:01:44.002 ****
2025-09-18 10:39:24.344823 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.344833 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.344842 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.344852 | orchestrator |
2025-09-18 10:39:24.344862 | orchestrator | TASK [k3s_server : Wait for node-token] ****************************************
2025-09-18 10:39:24.344871 | orchestrator | Thursday 18 September 2025 10:37:43 +0000 (0:00:26.723) 0:02:10.726 ****
2025-09-18 10:39:24.344881 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.344890 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.344900 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.344910 | orchestrator |
2025-09-18 10:39:24.344919 | orchestrator | TASK [k3s_server : Register node-token file access mode] ***********************
2025-09-18 10:39:24.344929 | orchestrator | Thursday 18 September 2025 10:37:44 +0000 (0:00:00.670) 0:02:11.396 ****
2025-09-18 10:39:24.344939 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.344948 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.344958 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.344967 | orchestrator |
2025-09-18 10:39:24.344977 | orchestrator | TASK [k3s_server : Change file access node-token] ******************************
2025-09-18 10:39:24.344987 | orchestrator | Thursday 18 September 2025 10:37:45 +0000 (0:00:01.700) 0:02:13.097 ****
2025-09-18 10:39:24.344996 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.345006 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.345016 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.345026 | orchestrator |
2025-09-18 10:39:24.345044 | orchestrator | TASK [k3s_server : Read node-token from master] ********************************
2025-09-18 10:39:24.345055 | orchestrator | Thursday 18 September 2025 10:37:46 +0000 (0:00:00.713) 0:02:13.810 ****
2025-09-18 10:39:24.345064 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.345074 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.345084 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.345094 | orchestrator |
2025-09-18 10:39:24.345103 | orchestrator | TASK [k3s_server : Store Master node-token] ************************************
2025-09-18 10:39:24.345113 | orchestrator | Thursday 18 September 2025 10:37:47 +0000 (0:00:00.913) 0:02:14.724 ****
2025-09-18 10:39:24.345123 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.345132 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.345142 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.345152 | orchestrator |
2025-09-18 10:39:24.345161 | orchestrator | TASK [k3s_server : Restore node-token file access] *****************************
2025-09-18 10:39:24.345171 | orchestrator | Thursday 18 September 2025 10:37:47 +0000 (0:00:00.312) 0:02:15.036 ****
2025-09-18 10:39:24.345181 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.345190 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.345200 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.345209 | orchestrator |
2025-09-18 10:39:24.345219 | orchestrator | TASK [k3s_server : Create directory .kube] *************************************
2025-09-18 10:39:24.345229 | orchestrator | Thursday 18 September 2025 10:37:48 +0000 (0:00:00.660) 0:02:15.697 ****
2025-09-18 10:39:24.345238 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.345248 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.345264 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.345274 | orchestrator |
2025-09-18 10:39:24.345283 | orchestrator | TASK [k3s_server : Copy config file to user home directory] ********************
2025-09-18 10:39:24.345293 | orchestrator | Thursday 18 September 2025 10:37:49 +0000 (0:00:00.678) 0:02:16.376 ****
2025-09-18 10:39:24.345302 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.345312 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.345322 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.345332 | orchestrator |
2025-09-18 10:39:24.345341 | orchestrator | TASK [k3s_server : Configure kubectl cluster to https://192.168.16.8:6443] *****
2025-09-18 10:39:24.345351 | orchestrator | Thursday 18 September 2025 10:37:50 +0000 (0:00:01.504) 0:02:17.880 ****
2025-09-18 10:39:24.345361 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:39:24.345371 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:39:24.345380 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:39:24.345390 | orchestrator |
2025-09-18 10:39:24.345399 | orchestrator | TASK [k3s_server : Create kubectl symlink] *************************************
2025-09-18 10:39:24.345409 | orchestrator | Thursday 18 September 2025 10:37:51 +0000 (0:00:00.914) 0:02:18.795 ****
2025-09-18 10:39:24.345419 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.345428 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.345438 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.345448 | orchestrator |
2025-09-18 10:39:24.345457 | orchestrator | TASK [k3s_server : Create crictl symlink] **************************************
2025-09-18 10:39:24.345467 | orchestrator | Thursday 18 September 2025 10:37:51 +0000 (0:00:00.310) 0:02:19.105 ****
2025-09-18 10:39:24.345476 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.345486 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.345496 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.345506 | orchestrator |
2025-09-18 10:39:24.345515 | orchestrator | TASK [k3s_server : Get contents of manifests folder] ***************************
2025-09-18 10:39:24.345525 | orchestrator | Thursday 18 September 2025 10:37:52 +0000 (0:00:00.297) 0:02:19.403 ****
2025-09-18 10:39:24.345535 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.345544 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.345554 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.345564 | orchestrator |
2025-09-18 10:39:24.345573 | orchestrator | TASK [k3s_server : Get sub dirs of manifests folder] ***************************
2025-09-18 10:39:24.345583 | orchestrator | Thursday 18 September 2025 10:37:53 +0000 (0:00:00.934) 0:02:20.337 ****
2025-09-18 10:39:24.345593 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.345603 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.345612 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.345622 | orchestrator |
2025-09-18 10:39:24.345632 | orchestrator | TASK [k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start] ***
2025-09-18 10:39:24.345642 | orchestrator | Thursday 18 September 2025 10:37:53 +0000 (0:00:00.680) 0:02:21.018 ****
2025-09-18 10:39:24.345652 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-18 10:39:24.345662 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-18 10:39:24.345672 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/rolebindings.yaml)
2025-09-18 10:39:24.345682 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-18 10:39:24.345706 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-18 10:39:24.345716 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/local-storage.yaml)
2025-09-18 10:39:24.345725 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-18 10:39:24.345735 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-18 10:39:24.345750 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/coredns.yaml)
2025-09-18 10:39:24.345760 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip.yaml)
2025-09-18 10:39:24.345769 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-18 10:39:24.345779 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-18 10:39:24.345794 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/vip-rbac.yaml)
2025-09-18 10:39:24.345804 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-18 10:39:24.345814 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-18 10:39:24.345824 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/runtimes.yaml)
2025-09-18 10:39:24.345833 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-18 10:39:24.345843 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-18 10:39:24.345853 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/ccm.yaml)
2025-09-18 10:39:24.345863 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/rancher/k3s/server/manifests/metrics-server)
2025-09-18 10:39:24.345872 | orchestrator |
2025-09-18 10:39:24.345882 | orchestrator | PLAY [Deploy k3s worker nodes] *************************************************
2025-09-18 10:39:24.345892 | orchestrator |
2025-09-18 10:39:24.345902 | orchestrator | TASK [k3s_agent : Validating arguments against arg spec 'main' - Setup k3s agents] ***
2025-09-18 10:39:24.345911 | orchestrator | Thursday 18 September 2025 10:37:56 +0000 (0:00:03.235) 0:02:24.253 ****
2025-09-18 10:39:24.345941 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:39:24.345951 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:39:24.345961 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:39:24.345971 | orchestrator |
2025-09-18 10:39:24.345991 | orchestrator | TASK [k3s_agent : Check if system is PXE-booted] *******************************
2025-09-18 10:39:24.346001 | orchestrator | Thursday 18 September 2025 10:37:57 +0000 (0:00:00.565) 0:02:24.819 ****
2025-09-18 10:39:24.346011 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:39:24.346065 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:39:24.346075 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:39:24.346085 | orchestrator |
2025-09-18 10:39:24.346095 | orchestrator | TASK [k3s_agent : Set fact for PXE-booted system] ******************************
2025-09-18 10:39:24.346105 | orchestrator | Thursday 18 September 2025 10:37:58 +0000 (0:00:00.650) 0:02:25.470 ****
2025-09-18 10:39:24.346114 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:39:24.346124 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:39:24.346134 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:39:24.346143 | orchestrator |
2025-09-18 10:39:24.346808 | orchestrator | TASK [k3s_agent : Include http_proxy configuration tasks] **********************
2025-09-18 10:39:24.346831 | orchestrator | Thursday 18 September 2025 10:37:58 +0000 (0:00:00.395) 0:02:25.865 ****
2025-09-18 10:39:24.346841 | orchestrator | included: /ansible/roles/k3s_agent/tasks/http_proxy.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:39:24.346851 | orchestrator |
2025-09-18 10:39:24.346861 | orchestrator | TASK [k3s_agent : Create k3s-node.service.d directory] *************************
2025-09-18 10:39:24.346871 | orchestrator | Thursday 18 September 2025 10:37:59 +0000 (0:00:01.023) 0:02:26.889 ****
2025-09-18 10:39:24.346880 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:39:24.346890 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:39:24.346900 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:39:24.346909 | orchestrator |
2025-09-18 10:39:24.346919 | orchestrator | TASK [k3s_agent : Copy K3s http_proxy conf file] *******************************
2025-09-18 10:39:24.346929 | orchestrator | Thursday 18 September 2025 10:38:00 +0000 (0:00:00.404) 0:02:27.294 ****
2025-09-18 10:39:24.346938 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:39:24.346959 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:39:24.346975 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:39:24.346991 | orchestrator |
2025-09-18 10:39:24.347008 | orchestrator | TASK [k3s_agent : Deploy K3s http_proxy conf] **********************************
2025-09-18 10:39:24.347021 | orchestrator | Thursday 18 September 2025 10:38:00 +0000 (0:00:00.391) 0:02:27.685 ****
2025-09-18 10:39:24.347044 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:39:24.347069 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:39:24.347082 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:39:24.347097 | orchestrator |
2025-09-18 10:39:24.347111 | orchestrator | TASK [k3s_agent : Create /etc/rancher/k3s directory] ***************************
2025-09-18 10:39:24.347126 | orchestrator | Thursday 18 September 2025 10:38:00 +0000 (0:00:00.402) 0:02:28.088 ****
2025-09-18 10:39:24.347141 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:39:24.347156 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:39:24.347172 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:39:24.347188 | orchestrator |
2025-09-18 10:39:24.347213 | orchestrator | TASK [k3s_agent : Create custom resolv.conf for k3s] ***************************
2025-09-18 10:39:24.347229 | orchestrator | Thursday 18 September 2025 10:38:01 +0000 (0:00:01.005) 0:02:29.094 ****
2025-09-18 10:39:24.347244 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:39:24.347258 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:39:24.347273 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:39:24.347287 | orchestrator |
2025-09-18 10:39:24.347301 | orchestrator | TASK [k3s_agent : Configure the k3s service] ***********************************
2025-09-18 10:39:24.347316 | orchestrator | Thursday 18 September 2025 10:38:03 +0000 (0:00:01.260) 0:02:30.354 ****
2025-09-18 10:39:24.347332 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:39:24.347347 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:39:24.347362 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:39:24.347377 | orchestrator |
2025-09-18 10:39:24.347393 | orchestrator | TASK [k3s_agent : Manage k3s service] ******************************************
2025-09-18 10:39:24.347410 | orchestrator | Thursday 18 September 2025 10:38:04 +0000 (0:00:01.306) 0:02:31.661 ****
2025-09-18 10:39:24.347425 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:39:24.347441 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:39:24.347457 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:39:24.347467 | orchestrator |
2025-09-18 10:39:24.347477 | orchestrator | PLAY [Prepare kubeconfig file] *************************************************
2025-09-18 10:39:24.347487 | orchestrator |
2025-09-18 10:39:24.347496 | orchestrator | TASK [Get home directory of operator user] *************************************
2025-09-18 10:39:24.347506 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:13.400) 0:02:45.062 ****
2025-09-18 10:39:24.347516 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.347526 | orchestrator |
2025-09-18 10:39:24.347549 | orchestrator | TASK [Create .kube directory] **************************************************
2025-09-18 10:39:24.347559 | orchestrator | Thursday 18 September 2025 10:38:18 +0000 (0:00:01.031) 0:02:46.093 ****
2025-09-18 10:39:24.347569 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.347579 | orchestrator |
2025-09-18 10:39:24.347589 | orchestrator | TASK [Get kubeconfig file] *****************************************************
2025-09-18 10:39:24.347598 | orchestrator | Thursday 18 September 2025 10:38:19 +0000 (0:00:00.463) 0:02:46.556 ****
2025-09-18 10:39:24.347608 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)]
2025-09-18 10:39:24.347618 | orchestrator |
2025-09-18 10:39:24.347628 | orchestrator | TASK [Write kubeconfig file] ***************************************************
2025-09-18 10:39:24.347637 | orchestrator | Thursday 18 September 2025 10:38:19 +0000 (0:00:00.574) 0:02:47.131 ****
2025-09-18 10:39:24.347647 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.347657 | orchestrator |
2025-09-18 10:39:24.347666 | orchestrator | TASK [Change server address in the kubeconfig] *********************************
2025-09-18 10:39:24.347676 | orchestrator | Thursday 18 September 2025 10:38:20 +0000 (0:00:00.701) 0:02:47.833 ****
2025-09-18 10:39:24.347717 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.347739 | orchestrator |
2025-09-18 10:39:24.347748 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************
2025-09-18 10:39:24.347758 | orchestrator | Thursday 18 September 2025 10:38:21 +0000 (0:00:00.548) 0:02:48.381 ****
2025-09-18 10:39:24.347768 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-18 10:39:24.347777 | orchestrator |
2025-09-18 10:39:24.347787 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ******
2025-09-18 10:39:24.347797 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:01.600) 0:02:49.982 ****
2025-09-18 10:39:24.347806 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-18 10:39:24.347816 | orchestrator |
2025-09-18 10:39:24.347826 | orchestrator | TASK [Set KUBECONFIG environment variable] *************************************
2025-09-18 10:39:24.347835 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.635) 0:02:50.617 ****
2025-09-18 10:39:24.347845 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.347854 | orchestrator |
2025-09-18 10:39:24.347864 | orchestrator | TASK [Enable kubectl command line completion] **********************************
2025-09-18 10:39:24.347874 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.453) 0:02:51.071 ****
2025-09-18 10:39:24.347883 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.347893 | orchestrator |
2025-09-18 10:39:24.347903 | orchestrator | PLAY [Apply role kubectl] ******************************************************
2025-09-18 10:39:24.347912 | orchestrator |
2025-09-18 10:39:24.347922 | orchestrator | TASK [kubectl : Gather variables for each operating system] ********************
2025-09-18 10:39:24.347932 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.766) 0:02:51.838 ****
2025-09-18 10:39:24.347941 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.347951 | orchestrator |
2025-09-18 10:39:24.347961 | orchestrator | TASK [kubectl : Include distribution specific install tasks] *******************
2025-09-18 10:39:24.347970 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.179) 0:02:52.018 ****
2025-09-18 10:39:24.347980 | orchestrator | included: /ansible/roles/kubectl/tasks/install-Debian-family.yml for testbed-manager
2025-09-18 10:39:24.347990 | orchestrator |
2025-09-18 10:39:24.347999 | orchestrator | TASK [kubectl : Remove old architecture-dependent repository] ******************
2025-09-18 10:39:24.348009 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.229) 0:02:52.248 ****
2025-09-18 10:39:24.348018 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.348028 | orchestrator |
2025-09-18 10:39:24.348038 | orchestrator | TASK [kubectl : Install apt-transport-https package] ***************************
2025-09-18 10:39:24.348047 | orchestrator | Thursday 18 September 2025 10:38:26 +0000 (0:00:01.037) 0:02:53.285 ****
2025-09-18 10:39:24.348057 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.348066 | orchestrator |
2025-09-18 10:39:24.348076 | orchestrator | TASK [kubectl : Add repository gpg key] ****************************************
2025-09-18 10:39:24.348086 | orchestrator | Thursday 18 September 2025 10:38:28 +0000 (0:00:01.979) 0:02:55.265 ****
2025-09-18 10:39:24.348095 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.348105 | orchestrator |
2025-09-18 10:39:24.348115 | orchestrator | TASK [kubectl : Set permissions of gpg key] ************************************
2025-09-18 10:39:24.348124 | orchestrator | Thursday 18 September 2025 10:38:28 +0000 (0:00:00.977) 0:02:56.242 ****
2025-09-18 10:39:24.348134 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.348143 | orchestrator |
2025-09-18 10:39:24.348159 | orchestrator | TASK [kubectl : Add repository Debian] *****************************************
2025-09-18 10:39:24.348169 | orchestrator | Thursday 18 September 2025 10:38:29 +0000 (0:00:00.896) 0:02:57.139 ****
2025-09-18 10:39:24.348179 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.348188 | orchestrator |
2025-09-18 10:39:24.348198 | orchestrator | TASK [kubectl : Install required packages] *************************************
2025-09-18 10:39:24.348207 | orchestrator | Thursday 18 September 2025 10:38:36 +0000 (0:00:06.644) 0:03:03.783 ****
2025-09-18 10:39:24.348217 | orchestrator | changed: [testbed-manager]
2025-09-18 10:39:24.348227 | orchestrator |
2025-09-18 10:39:24.348237 | orchestrator | TASK [kubectl : Remove kubectl symlink] ****************************************
2025-09-18 10:39:24.348256 | orchestrator | Thursday 18 September 2025 10:38:50 +0000 (0:00:13.825) 0:03:17.609 ****
2025-09-18 10:39:24.348266 | orchestrator | ok: [testbed-manager]
2025-09-18 10:39:24.348276 | orchestrator |
2025-09-18 10:39:24.348285 | orchestrator | PLAY [Run post actions on master nodes] ****************************************
2025-09-18 10:39:24.348295 | orchestrator |
2025-09-18 10:39:24.348305 | orchestrator | TASK [k3s_server_post : Validating arguments against arg spec 'main' - Configure k3s cluster] ***
2025-09-18 10:39:24.348314 | orchestrator | Thursday 18 September 2025 10:38:50 +0000 (0:00:00.571) 0:03:18.180 ****
2025-09-18 10:39:24.348324 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:39:24.348334 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:39:24.348343 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:39:24.348353 | orchestrator |
2025-09-18 10:39:24.348363 | orchestrator | TASK [k3s_server_post : Deploy calico] *****************************************
2025-09-18 10:39:24.348372 | orchestrator | Thursday 18 September 2025 10:38:51 +0000 (0:00:00.310) 0:03:18.490 ****
2025-09-18 10:39:24.348388 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348398 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:39:24.348408 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:39:24.348417 | orchestrator |
2025-09-18 10:39:24.348427 | orchestrator | TASK [k3s_server_post : Deploy cilium] *****************************************
2025-09-18 10:39:24.348437 | orchestrator | Thursday 18 September 2025 10:38:51 +0000 (0:00:00.346) 0:03:18.837 ****
2025-09-18 10:39:24.348446 | orchestrator | included: /ansible/roles/k3s_server_post/tasks/cilium.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:39:24.348456 | orchestrator |
2025-09-18 10:39:24.348466 | orchestrator | TASK [k3s_server_post : Create tmp directory on first master] ******************
2025-09-18 10:39:24.348475 | orchestrator | Thursday 18 September 2025 10:38:52 +0000 (0:00:00.683) 0:03:19.520 ****
2025-09-18 10:39:24.348485 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348494 | orchestrator |
2025-09-18 10:39:24.348504 | orchestrator | TASK [k3s_server_post : Check if Cilium CLI is installed] **********************
2025-09-18 10:39:24.348514 | orchestrator | Thursday 18 September 2025 10:38:52 +0000 (0:00:00.203) 0:03:19.724 ****
2025-09-18 10:39:24.348523 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348533 | orchestrator |
2025-09-18 10:39:24.348542 | orchestrator | TASK [k3s_server_post : Check for Cilium CLI version in command output] ********
2025-09-18 10:39:24.348552 | orchestrator | Thursday 18 September 2025 10:38:52 +0000 (0:00:00.190) 0:03:19.915 ****
2025-09-18 10:39:24.348562 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348571 | orchestrator |
2025-09-18 10:39:24.348581 | orchestrator | TASK [k3s_server_post : Get latest stable Cilium CLI version file] *************
2025-09-18 10:39:24.348591 | orchestrator | Thursday 18 September 2025 10:38:52 +0000 (0:00:00.208) 0:03:20.123 ****
2025-09-18 10:39:24.348600 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348610 | orchestrator |
2025-09-18 10:39:24.348620 | orchestrator | TASK [k3s_server_post : Read Cilium CLI stable version from file] **************
2025-09-18 10:39:24.348630 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:00.216) 0:03:20.340 ****
2025-09-18 10:39:24.348639 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348649 | orchestrator |
2025-09-18 10:39:24.348659 | orchestrator | TASK [k3s_server_post : Log installed Cilium CLI version] **********************
2025-09-18 10:39:24.348668 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:00.196) 0:03:20.537 ****
2025-09-18 10:39:24.348678 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348703 | orchestrator |
2025-09-18 10:39:24.348712 | orchestrator | TASK [k3s_server_post : Log latest stable Cilium CLI version] ******************
2025-09-18 10:39:24.348722 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:00.186) 0:03:20.724 ****
2025-09-18 10:39:24.348732 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348741 | orchestrator |
2025-09-18 10:39:24.348751 | orchestrator | TASK [k3s_server_post : Determine if Cilium CLI needs installation or update] ***
2025-09-18 10:39:24.348761 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:00.201) 0:03:20.925 ****
2025-09-18 10:39:24.348776 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348786 | orchestrator |
2025-09-18 10:39:24.348795 | orchestrator | TASK [k3s_server_post : Set architecture variable] *****************************
2025-09-18 10:39:24.348805 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:00.215) 0:03:21.140 ****
2025-09-18 10:39:24.348814 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348824 | orchestrator |
2025-09-18 10:39:24.348834 | orchestrator | TASK [k3s_server_post : Download Cilium CLI and checksum] **********************
2025-09-18 10:39:24.348843 | orchestrator | Thursday 18 September 2025 10:38:54 +0000 (0:00:00.175) 0:03:21.316 ****
2025-09-18 10:39:24.348853 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz)
2025-09-18 10:39:24.348863 | orchestrator | skipping: [testbed-node-0] => (item=.tar.gz.sha256sum)
2025-09-18 10:39:24.348873 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348882 | orchestrator |
2025-09-18 10:39:24.348892 | orchestrator | TASK [k3s_server_post : Verify the downloaded tarball] *************************
2025-09-18 10:39:24.348901 | orchestrator | Thursday 18 September 2025 10:38:54 +0000 (0:00:00.615) 0:03:21.931 ****
2025-09-18 10:39:24.348911 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348921 | orchestrator |
2025-09-18 10:39:24.348930 | orchestrator | TASK [k3s_server_post : Extract Cilium CLI to /usr/local/bin] ******************
2025-09-18 10:39:24.348940 | orchestrator | Thursday 18 September 2025 10:38:54 +0000 (0:00:00.196) 0:03:22.128 ****
2025-09-18 10:39:24.348949 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.348959 | orchestrator |
2025-09-18 10:39:24.348973 | orchestrator | TASK [k3s_server_post : Remove downloaded tarball and checksum file] ***********
2025-09-18 10:39:24.348983 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:00.197) 0:03:22.326 ****
2025-09-18 10:39:24.348993 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349003 | orchestrator |
2025-09-18 10:39:24.349012 | orchestrator | TASK [k3s_server_post : Wait for connectivity to kube VIP] *********************
2025-09-18 10:39:24.349022 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:00.195) 0:03:22.521 ****
2025-09-18 10:39:24.349031 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349041 | orchestrator |
2025-09-18 10:39:24.349051 | orchestrator | TASK [k3s_server_post : Fail if kube VIP not reachable] ************************
2025-09-18 10:39:24.349060 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:00.195) 0:03:22.717 ****
2025-09-18 10:39:24.349070 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349080 | orchestrator |
2025-09-18 10:39:24.349089 | orchestrator | TASK [k3s_server_post : Test for existing Cilium install] **********************
2025-09-18 10:39:24.349099 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:00.208) 0:03:22.925 ****
2025-09-18 10:39:24.349109 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349118 | orchestrator |
2025-09-18 10:39:24.349128 | orchestrator | TASK [k3s_server_post : Check Cilium version] **********************************
2025-09-18 10:39:24.349138 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:00.250) 0:03:23.175 ****
2025-09-18 10:39:24.349147 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349157 | orchestrator |
2025-09-18 10:39:24.349166 | orchestrator | TASK [k3s_server_post : Parse installed Cilium version] ************************
2025-09-18 10:39:24.349176 | orchestrator | Thursday 18 September 2025 10:38:56 +0000 (0:00:00.244) 0:03:23.420 ****
2025-09-18 10:39:24.349191 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349200 | orchestrator |
2025-09-18 10:39:24.349210 | orchestrator | TASK [k3s_server_post : Determine if Cilium needs update] **********************
2025-09-18 10:39:24.349220 | orchestrator | Thursday 18 September 2025 10:38:56 +0000 (0:00:00.432) 0:03:23.852 ****
2025-09-18 10:39:24.349229 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349239 | orchestrator |
2025-09-18 10:39:24.349248 | orchestrator | TASK [k3s_server_post : Log result] ********************************************
2025-09-18 10:39:24.349258 | orchestrator | Thursday 18 September 2025 10:38:56 +0000 (0:00:00.311) 0:03:24.163 ****
2025-09-18 10:39:24.349268 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349283 | orchestrator |
2025-09-18 10:39:24.349293 | orchestrator | TASK [k3s_server_post : Install Cilium] ****************************************
2025-09-18 10:39:24.349302 | orchestrator | Thursday 18 September 2025 10:38:57 +0000 (0:00:00.306) 0:03:24.469 ****
2025-09-18 10:39:24.349312 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:39:24.349322 | orchestrator |
2025-09-18 10:39:24.349331 | orchestrator | TASK [k3s_server_post : Wait for Cilium resources] *****************************
2025-09-18 10:39:24.349341 | orchestrator | Thursday 18 September 2025 10:38:57 +0000 (0:00:00.219) 0:03:24.689 ****
2025-09-18 10:39:24.349351 | orchestrator | skipping: [testbed-node-0] => (item=deployment/cilium-operator)
2025-09-18 10:39:24.349361 | orchestrator | skipping: [testbed-node-0] => (item=daemonset/cilium)
2025-09-18 10:39:24.349370 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-relay)
2025-09-18 10:39:24.349380 | orchestrator | skipping: [testbed-node-0] => (item=deployment/hubble-ui)
2025-09-18
10:39:24.349390 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349400 | orchestrator | 2025-09-18 10:39:24.349409 | orchestrator | TASK [k3s_server_post : Set _cilium_bgp_neighbors fact] ************************ 2025-09-18 10:39:24.349419 | orchestrator | Thursday 18 September 2025 10:38:58 +0000 (0:00:01.103) 0:03:25.793 **** 2025-09-18 10:39:24.349429 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349438 | orchestrator | 2025-09-18 10:39:24.349448 | orchestrator | TASK [k3s_server_post : Copy BGP manifests to first master] ******************** 2025-09-18 10:39:24.349458 | orchestrator | Thursday 18 September 2025 10:38:58 +0000 (0:00:00.221) 0:03:26.015 **** 2025-09-18 10:39:24.349467 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349477 | orchestrator | 2025-09-18 10:39:24.349487 | orchestrator | TASK [k3s_server_post : Apply BGP manifests] *********************************** 2025-09-18 10:39:24.349496 | orchestrator | Thursday 18 September 2025 10:38:59 +0000 (0:00:00.259) 0:03:26.274 **** 2025-09-18 10:39:24.349506 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349516 | orchestrator | 2025-09-18 10:39:24.349525 | orchestrator | TASK [k3s_server_post : Print error message if BGP manifests application fails] *** 2025-09-18 10:39:24.349535 | orchestrator | Thursday 18 September 2025 10:38:59 +0000 (0:00:00.242) 0:03:26.516 **** 2025-09-18 10:39:24.349544 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349554 | orchestrator | 2025-09-18 10:39:24.349564 | orchestrator | TASK [k3s_server_post : Test for BGP config resources] ************************* 2025-09-18 10:39:24.349574 | orchestrator | Thursday 18 September 2025 10:38:59 +0000 (0:00:00.288) 0:03:26.804 **** 2025-09-18 10:39:24.349583 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get CiliumBGPPeeringPolicy.cilium.io)  2025-09-18 10:39:24.349593 | orchestrator | skipping: [testbed-node-0] => (item=kubectl get 
CiliumLoadBalancerIPPool.cilium.io)  2025-09-18 10:39:24.349603 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349612 | orchestrator | 2025-09-18 10:39:24.349622 | orchestrator | TASK [k3s_server_post : Deploy metallb pool] *********************************** 2025-09-18 10:39:24.349632 | orchestrator | Thursday 18 September 2025 10:38:59 +0000 (0:00:00.441) 0:03:27.246 **** 2025-09-18 10:39:24.349642 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.349651 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.349661 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.349671 | orchestrator | 2025-09-18 10:39:24.349680 | orchestrator | TASK [k3s_server_post : Remove tmp directory used for manifests] *************** 2025-09-18 10:39:24.349704 | orchestrator | Thursday 18 September 2025 10:39:00 +0000 (0:00:00.534) 0:03:27.780 **** 2025-09-18 10:39:24.349714 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.349724 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.349733 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.349743 | orchestrator | 2025-09-18 10:39:24.349753 | orchestrator | PLAY [Apply role k9s] ********************************************************** 2025-09-18 10:39:24.349763 | orchestrator | 2025-09-18 10:39:24.349777 | orchestrator | TASK [k9s : Gather variables for each operating system] ************************ 2025-09-18 10:39:24.349787 | orchestrator | Thursday 18 September 2025 10:39:01 +0000 (0:00:01.378) 0:03:29.159 **** 2025-09-18 10:39:24.349802 | orchestrator | ok: [testbed-manager] 2025-09-18 10:39:24.349812 | orchestrator | 2025-09-18 10:39:24.349822 | orchestrator | TASK [k9s : Include distribution specific install tasks] *********************** 2025-09-18 10:39:24.349832 | orchestrator | Thursday 18 September 2025 10:39:02 +0000 (0:00:00.132) 0:03:29.291 **** 2025-09-18 10:39:24.349841 | orchestrator | included: /ansible/roles/k9s/tasks/install-Debian-family.yml 
for testbed-manager 2025-09-18 10:39:24.349851 | orchestrator | 2025-09-18 10:39:24.349860 | orchestrator | TASK [k9s : Install k9s packages] ********************************************** 2025-09-18 10:39:24.349870 | orchestrator | Thursday 18 September 2025 10:39:02 +0000 (0:00:00.226) 0:03:29.517 **** 2025-09-18 10:39:24.349879 | orchestrator | changed: [testbed-manager] 2025-09-18 10:39:24.349889 | orchestrator | 2025-09-18 10:39:24.349899 | orchestrator | PLAY [Manage labels, annotations, and taints on all k3s nodes] ***************** 2025-09-18 10:39:24.349908 | orchestrator | 2025-09-18 10:39:24.349918 | orchestrator | TASK [Merge labels, annotations, and taints] *********************************** 2025-09-18 10:39:24.349927 | orchestrator | Thursday 18 September 2025 10:39:07 +0000 (0:00:05.095) 0:03:34.613 **** 2025-09-18 10:39:24.349937 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:39:24.349947 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:39:24.349956 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:39:24.349966 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:39:24.349976 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:39:24.349985 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:39:24.349995 | orchestrator | 2025-09-18 10:39:24.350009 | orchestrator | TASK [Manage labels] *********************************************************** 2025-09-18 10:39:24.350079 | orchestrator | Thursday 18 September 2025 10:39:08 +0000 (0:00:00.725) 0:03:35.338 **** 2025-09-18 10:39:24.350090 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-18 10:39:24.350100 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/compute-plane=true) 2025-09-18 10:39:24.350109 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 10:39:24.350119 | orchestrator | ok: [testbed-node-5 -> localhost] => 
(item=node-role.osism.tech/compute-plane=true) 2025-09-18 10:39:24.350128 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 10:39:24.350138 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/control-plane=true) 2025-09-18 10:39:24.350147 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 10:39:24.350157 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 10:39:24.350166 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 10:39:24.350176 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 10:39:24.350185 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.kubernetes.io/worker=worker) 2025-09-18 10:39:24.350194 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=openstack-control-plane=enabled) 2025-09-18 10:39:24.350204 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 10:39:24.350213 | orchestrator | ok: [testbed-node-5 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 10:39:24.350223 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 10:39:24.350232 | orchestrator | ok: [testbed-node-4 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 10:39:24.350242 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/network-plane=true) 2025-09-18 10:39:24.350251 | orchestrator | ok: [testbed-node-3 -> localhost] => (item=node-role.osism.tech/rook-osd=true) 2025-09-18 10:39:24.350261 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-18 10:39:24.350277 | orchestrator | ok: [testbed-node-1 -> localhost] => 
(item=node-role.osism.tech/rook-mds=true) 2025-09-18 10:39:24.350286 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mds=true) 2025-09-18 10:39:24.350296 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 10:39:24.350305 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 10:39:24.350315 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mgr=true) 2025-09-18 10:39:24.350325 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 10:39:24.350334 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 10:39:24.350344 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-mon=true) 2025-09-18 10:39:24.350353 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-18 10:39:24.350363 | orchestrator | ok: [testbed-node-1 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-18 10:39:24.350372 | orchestrator | ok: [testbed-node-2 -> localhost] => (item=node-role.osism.tech/rook-rgw=true) 2025-09-18 10:39:24.350382 | orchestrator | 2025-09-18 10:39:24.350391 | orchestrator | TASK [Manage annotations] ****************************************************** 2025-09-18 10:39:24.350406 | orchestrator | Thursday 18 September 2025 10:39:20 +0000 (0:00:12.403) 0:03:47.742 **** 2025-09-18 10:39:24.350415 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.350425 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.350435 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.350444 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.350454 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.350464 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
10:39:24.350473 | orchestrator | 2025-09-18 10:39:24.350483 | orchestrator | TASK [Manage taints] *********************************************************** 2025-09-18 10:39:24.350492 | orchestrator | Thursday 18 September 2025 10:39:21 +0000 (0:00:00.873) 0:03:48.615 **** 2025-09-18 10:39:24.350502 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:39:24.350511 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:39:24.350521 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:39:24.350530 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:39:24.350540 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:39:24.350550 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:39:24.350559 | orchestrator | 2025-09-18 10:39:24.350569 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:39:24.350579 | orchestrator | testbed-manager : ok=21  changed=11  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:39:24.350590 | orchestrator | testbed-node-0 : ok=42  changed=20  unreachable=0 failed=0 skipped=45  rescued=0 ignored=0 2025-09-18 10:39:24.350606 | orchestrator | testbed-node-1 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-18 10:39:24.350616 | orchestrator | testbed-node-2 : ok=39  changed=17  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0 2025-09-18 10:39:24.350626 | orchestrator | testbed-node-3 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 10:39:24.350635 | orchestrator | testbed-node-4 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 10:39:24.350645 | orchestrator | testbed-node-5 : ok=19  changed=9  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0 2025-09-18 10:39:24.350660 | orchestrator | 2025-09-18 10:39:24.350670 | orchestrator | 2025-09-18 10:39:24.350679 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-18 10:39:24.350706 | orchestrator | Thursday 18 September 2025 10:39:21 +0000 (0:00:00.403) 0:03:49.019 **** 2025-09-18 10:39:24.350716 | orchestrator | =============================================================================== 2025-09-18 10:39:24.350726 | orchestrator | k3s_server : Verify that all nodes actually joined (check k3s-init.service if this fails) -- 54.00s 2025-09-18 10:39:24.350736 | orchestrator | k3s_server : Enable and check K3s service ------------------------------ 26.72s 2025-09-18 10:39:24.350746 | orchestrator | kubectl : Install required packages ------------------------------------ 13.83s 2025-09-18 10:39:24.350755 | orchestrator | k3s_agent : Manage k3s service ----------------------------------------- 13.40s 2025-09-18 10:39:24.350765 | orchestrator | Manage labels ---------------------------------------------------------- 12.40s 2025-09-18 10:39:24.350774 | orchestrator | kubectl : Add repository Debian ----------------------------------------- 6.64s 2025-09-18 10:39:24.350784 | orchestrator | k3s_download : Download k3s binary x64 ---------------------------------- 5.13s 2025-09-18 10:39:24.350794 | orchestrator | k9s : Install k9s packages ---------------------------------------------- 5.10s 2025-09-18 10:39:24.350803 | orchestrator | k3s_server : Remove manifests and folders that are only needed for bootstrapping cluster so k3s doesn't auto apply on start --- 3.24s 2025-09-18 10:39:24.350813 | orchestrator | k3s_server : Init cluster inside the transient k3s-init service --------- 3.14s 2025-09-18 10:39:24.350823 | orchestrator | k3s_server : Set _kube_vip_bgp_peers fact ------------------------------- 2.74s 2025-09-18 10:39:24.350833 | orchestrator | k3s_prereq : Enable IPv4 forwarding ------------------------------------- 2.71s 2025-09-18 10:39:24.350842 | orchestrator | k3s_prereq : Enable IPv6 router advertisements -------------------------- 2.22s 
2025-09-18 10:39:24.350852 | orchestrator | k3s_custom_registries : Create directory /etc/rancher/k3s --------------- 2.19s 2025-09-18 10:39:24.350862 | orchestrator | k3s_server : Copy vip manifest to first master -------------------------- 2.11s 2025-09-18 10:39:24.350871 | orchestrator | kubectl : Install apt-transport-https package --------------------------- 1.98s 2025-09-18 10:39:24.350881 | orchestrator | k3s_custom_registries : Insert registries into /etc/rancher/k3s/registries.yaml --- 1.97s 2025-09-18 10:39:24.350891 | orchestrator | k3s_server : Copy K3s service file -------------------------------------- 1.76s 2025-09-18 10:39:24.350900 | orchestrator | k3s_download : Download k3s binary armhf -------------------------------- 1.71s 2025-09-18 10:39:24.350910 | orchestrator | k3s_server : Register node-token file access mode ----------------------- 1.70s 2025-09-18 10:39:24.350919 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:24.350929 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:24.350943 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task bc27d0bd-372e-4bca-8d7c-a9c3cb8e0514 is in state STARTED 2025-09-18 10:39:24.350954 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task b94570a4-9d86-4f7e-8a6e-d2f3f42cd53e is in state STARTED 2025-09-18 10:39:24.350963 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task 916d666e-fc79-48e3-bcc0-8f11c5c1faae is in state SUCCESS 2025-09-18 10:39:24.350973 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:24.350983 | orchestrator | 2025-09-18 10:39:24 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:24.350992 | orchestrator | 2025-09-18 10:39:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:27.380104 | orchestrator | 2025-09-18 
10:39:27 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:27.380214 | orchestrator | 2025-09-18 10:39:27 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:27.380254 | orchestrator | 2025-09-18 10:39:27 | INFO  | Task bc27d0bd-372e-4bca-8d7c-a9c3cb8e0514 is in state STARTED 2025-09-18 10:39:27.380857 | orchestrator | 2025-09-18 10:39:27 | INFO  | Task b94570a4-9d86-4f7e-8a6e-d2f3f42cd53e is in state STARTED 2025-09-18 10:39:27.383571 | orchestrator | 2025-09-18 10:39:27 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:27.384145 | orchestrator | 2025-09-18 10:39:27 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:27.384178 | orchestrator | 2025-09-18 10:39:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:30.412917 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:30.414222 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:30.414838 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task bc27d0bd-372e-4bca-8d7c-a9c3cb8e0514 is in state SUCCESS 2025-09-18 10:39:30.417385 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task b94570a4-9d86-4f7e-8a6e-d2f3f42cd53e is in state STARTED 2025-09-18 10:39:30.418163 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:30.419071 | orchestrator | 2025-09-18 10:39:30 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:30.419091 | orchestrator | 2025-09-18 10:39:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:33.530414 | orchestrator | 2025-09-18 10:39:33 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:33.531032 | orchestrator | 2025-09-18 
10:39:33 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:33.531827 | orchestrator | 2025-09-18 10:39:33 | INFO  | Task b94570a4-9d86-4f7e-8a6e-d2f3f42cd53e is in state SUCCESS 2025-09-18 10:39:33.533077 | orchestrator | 2025-09-18 10:39:33 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:33.534236 | orchestrator | 2025-09-18 10:39:33 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:33.534273 | orchestrator | 2025-09-18 10:39:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:36.587727 | orchestrator | 2025-09-18 10:39:36 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:36.587940 | orchestrator | 2025-09-18 10:39:36 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:36.590807 | orchestrator | 2025-09-18 10:39:36 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:36.591425 | orchestrator | 2025-09-18 10:39:36 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:36.591553 | orchestrator | 2025-09-18 10:39:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:39.631477 | orchestrator | 2025-09-18 10:39:39 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:39.631586 | orchestrator | 2025-09-18 10:39:39 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:39.632074 | orchestrator | 2025-09-18 10:39:39 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:39.632997 | orchestrator | 2025-09-18 10:39:39 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:39.633019 | orchestrator | 2025-09-18 10:39:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:42.675375 | orchestrator | 2025-09-18 10:39:42 | INFO  | Task 
fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:42.676115 | orchestrator | 2025-09-18 10:39:42 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:42.677365 | orchestrator | 2025-09-18 10:39:42 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:42.678493 | orchestrator | 2025-09-18 10:39:42 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:42.678519 | orchestrator | 2025-09-18 10:39:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:45.731072 | orchestrator | 2025-09-18 10:39:45 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:45.732142 | orchestrator | 2025-09-18 10:39:45 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:45.733593 | orchestrator | 2025-09-18 10:39:45 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:45.735239 | orchestrator | 2025-09-18 10:39:45 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:45.735325 | orchestrator | 2025-09-18 10:39:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:48.775397 | orchestrator | 2025-09-18 10:39:48 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:48.776880 | orchestrator | 2025-09-18 10:39:48 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:48.776910 | orchestrator | 2025-09-18 10:39:48 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:48.777454 | orchestrator | 2025-09-18 10:39:48 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:48.778499 | orchestrator | 2025-09-18 10:39:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:51.824942 | orchestrator | 2025-09-18 10:39:51 | INFO  | Task 
fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:51.825224 | orchestrator | 2025-09-18 10:39:51 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:51.826406 | orchestrator | 2025-09-18 10:39:51 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:51.831198 | orchestrator | 2025-09-18 10:39:51 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:51.831231 | orchestrator | 2025-09-18 10:39:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:54.869772 | orchestrator | 2025-09-18 10:39:54 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:54.869890 | orchestrator | 2025-09-18 10:39:54 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:54.872751 | orchestrator | 2025-09-18 10:39:54 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:54.876244 | orchestrator | 2025-09-18 10:39:54 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:54.876351 | orchestrator | 2025-09-18 10:39:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:39:57.910172 | orchestrator | 2025-09-18 10:39:57 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:39:57.911165 | orchestrator | 2025-09-18 10:39:57 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:39:57.912745 | orchestrator | 2025-09-18 10:39:57 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:39:57.913723 | orchestrator | 2025-09-18 10:39:57 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:39:57.913770 | orchestrator | 2025-09-18 10:39:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:00.966093 | orchestrator | 2025-09-18 10:40:00 | INFO  | Task 
fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:00.966541 | orchestrator | 2025-09-18 10:40:00 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:00.968229 | orchestrator | 2025-09-18 10:40:00 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:00.969156 | orchestrator | 2025-09-18 10:40:00 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:00.969346 | orchestrator | 2025-09-18 10:40:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:04.009415 | orchestrator | 2025-09-18 10:40:04 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:04.012704 | orchestrator | 2025-09-18 10:40:04 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:04.015275 | orchestrator | 2025-09-18 10:40:04 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:04.017572 | orchestrator | 2025-09-18 10:40:04 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:04.017787 | orchestrator | 2025-09-18 10:40:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:07.055538 | orchestrator | 2025-09-18 10:40:07 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:07.056778 | orchestrator | 2025-09-18 10:40:07 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:07.059445 | orchestrator | 2025-09-18 10:40:07 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:07.063002 | orchestrator | 2025-09-18 10:40:07 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:07.063026 | orchestrator | 2025-09-18 10:40:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:10.102253 | orchestrator | 2025-09-18 10:40:10 | INFO  | Task 
fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:10.105950 | orchestrator | 2025-09-18 10:40:10 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:10.107943 | orchestrator | 2025-09-18 10:40:10 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:10.111130 | orchestrator | 2025-09-18 10:40:10 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:10.111211 | orchestrator | 2025-09-18 10:40:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:13.154960 | orchestrator | 2025-09-18 10:40:13 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:13.155592 | orchestrator | 2025-09-18 10:40:13 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:13.156764 | orchestrator | 2025-09-18 10:40:13 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:13.157873 | orchestrator | 2025-09-18 10:40:13 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:13.158010 | orchestrator | 2025-09-18 10:40:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:16.201137 | orchestrator | 2025-09-18 10:40:16 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:16.201724 | orchestrator | 2025-09-18 10:40:16 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state STARTED 2025-09-18 10:40:16.202136 | orchestrator | 2025-09-18 10:40:16 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:40:16.202954 | orchestrator | 2025-09-18 10:40:16 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED 2025-09-18 10:40:16.202981 | orchestrator | 2025-09-18 10:40:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:40:19.231399 | orchestrator | 2025-09-18 10:40:19 | INFO  | Task 
fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:40:46.574501 | orchestrator | 2025-09-18 10:40:46 | INFO  | Task d5a24da7-39ca-44b7-97c5-3a4c8f696f0d is in state SUCCESS 2025-09-18 10:40:46.576075 | orchestrator | 2025-09-18 10:40:46.576113 | orchestrator | 2025-09-18 10:40:46.576126 | orchestrator | PLAY [Copy kubeconfig to the configuration repository] ************************* 2025-09-18 10:40:46.576138 | orchestrator | 2025-09-18 10:40:46.576150 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-18 10:40:46.576161 | orchestrator | Thursday 18 September 2025 10:39:25 +0000 (0:00:00.163) 0:00:00.163 **** 2025-09-18 10:40:46.576173 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-18 10:40:46.576185 | orchestrator | 2025-09-18 10:40:46.576196 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-18 10:40:46.576207 | orchestrator | Thursday 18 September 2025 10:39:26 +0000 (0:00:00.708) 0:00:00.871 **** 2025-09-18 10:40:46.576218 | orchestrator | changed: [testbed-manager] 2025-09-18 10:40:46.576229 | orchestrator | 2025-09-18 10:40:46.576240 | orchestrator | TASK [Change server address in the kubeconfig file] **************************** 2025-09-18 10:40:46.576278 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:01.213) 0:00:02.084 **** 2025-09-18 10:40:46.576290 | orchestrator | changed: [testbed-manager] 2025-09-18 10:40:46.576301 | orchestrator | 2025-09-18 10:40:46.576312 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:40:46.576324 | orchestrator | testbed-manager : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:40:46.576337 | orchestrator | 2025-09-18 10:40:46.576348 | orchestrator | 2025-09-18 10:40:46.576359 | orchestrator | TASKS RECAP 
******************************************************************** 2025-09-18 10:40:46.576370 | orchestrator | Thursday 18 September 2025 10:39:28 +0000 (0:00:00.553) 0:00:02.638 **** 2025-09-18 10:40:46.576381 | orchestrator | =============================================================================== 2025-09-18 10:40:46.576391 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.21s 2025-09-18 10:40:46.576402 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.71s 2025-09-18 10:40:46.576413 | orchestrator | Change server address in the kubeconfig file ---------------------------- 0.56s 2025-09-18 10:40:46.576424 | orchestrator | 2025-09-18 10:40:46.576434 | orchestrator | 2025-09-18 10:40:46.576445 | orchestrator | PLAY [Prepare kubeconfig file] ************************************************* 2025-09-18 10:40:46.576456 | orchestrator | 2025-09-18 10:40:46.576467 | orchestrator | TASK [Get home directory of operator user] ************************************* 2025-09-18 10:40:46.576478 | orchestrator | Thursday 18 September 2025 10:39:25 +0000 (0:00:00.132) 0:00:00.132 **** 2025-09-18 10:40:46.576488 | orchestrator | ok: [testbed-manager] 2025-09-18 10:40:46.576500 | orchestrator | 2025-09-18 10:40:46.576511 | orchestrator | TASK [Create .kube directory] ************************************************** 2025-09-18 10:40:46.576522 | orchestrator | Thursday 18 September 2025 10:39:26 +0000 (0:00:00.674) 0:00:00.807 **** 2025-09-18 10:40:46.576565 | orchestrator | ok: [testbed-manager] 2025-09-18 10:40:46.576576 | orchestrator | 2025-09-18 10:40:46.576587 | orchestrator | TASK [Get kubeconfig file] ***************************************************** 2025-09-18 10:40:46.576598 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:00.728) 0:00:01.535 **** 2025-09-18 10:40:46.576609 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] 2025-09-18 
10:40:46.576620 | orchestrator | 2025-09-18 10:40:46.576631 | orchestrator | TASK [Write kubeconfig file] *************************************************** 2025-09-18 10:40:46.576642 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:00.696) 0:00:02.231 **** 2025-09-18 10:40:46.576653 | orchestrator | changed: [testbed-manager] 2025-09-18 10:40:46.576664 | orchestrator | 2025-09-18 10:40:46.576676 | orchestrator | TASK [Change server address in the kubeconfig] ********************************* 2025-09-18 10:40:46.576689 | orchestrator | Thursday 18 September 2025 10:39:29 +0000 (0:00:01.164) 0:00:03.396 **** 2025-09-18 10:40:46.576701 | orchestrator | changed: [testbed-manager] 2025-09-18 10:40:46.576713 | orchestrator | 2025-09-18 10:40:46.576725 | orchestrator | TASK [Make kubeconfig available for use inside the manager service] ************ 2025-09-18 10:40:46.576736 | orchestrator | Thursday 18 September 2025 10:39:29 +0000 (0:00:00.783) 0:00:04.179 **** 2025-09-18 10:40:46.576748 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 10:40:46.576760 | orchestrator | 2025-09-18 10:40:46.576773 | orchestrator | TASK [Change server address in the kubeconfig inside the manager service] ****** 2025-09-18 10:40:46.576784 | orchestrator | Thursday 18 September 2025 10:39:31 +0000 (0:00:01.781) 0:00:05.961 **** 2025-09-18 10:40:46.576796 | orchestrator | changed: [testbed-manager -> localhost] 2025-09-18 10:40:46.576935 | orchestrator | 2025-09-18 10:40:46.576961 | orchestrator | TASK [Set KUBECONFIG environment variable] ************************************* 2025-09-18 10:40:46.576981 | orchestrator | Thursday 18 September 2025 10:39:32 +0000 (0:00:00.703) 0:00:06.664 **** 2025-09-18 10:40:46.576996 | orchestrator | ok: [testbed-manager] 2025-09-18 10:40:46.577008 | orchestrator | 2025-09-18 10:40:46.577021 | orchestrator | TASK [Enable kubectl command line completion] ********************************** 2025-09-18 10:40:46.577043 | 
orchestrator | Thursday 18 September 2025 10:39:32 +0000 (0:00:00.393) 0:00:07.057 **** 2025-09-18 10:40:46.577054 | orchestrator | ok: [testbed-manager] 2025-09-18 10:40:46.577065 | orchestrator | 2025-09-18 10:40:46.577076 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:40:46.577087 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:40:46.577099 | orchestrator | 2025-09-18 10:40:46.577110 | orchestrator | 2025-09-18 10:40:46.577121 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:40:46.577132 | orchestrator | Thursday 18 September 2025 10:39:33 +0000 (0:00:00.281) 0:00:07.338 **** 2025-09-18 10:40:46.577143 | orchestrator | =============================================================================== 2025-09-18 10:40:46.577154 | orchestrator | Make kubeconfig available for use inside the manager service ------------ 1.78s 2025-09-18 10:40:46.577164 | orchestrator | Write kubeconfig file --------------------------------------------------- 1.16s 2025-09-18 10:40:46.577185 | orchestrator | Change server address in the kubeconfig --------------------------------- 0.78s 2025-09-18 10:40:46.577210 | orchestrator | Create .kube directory -------------------------------------------------- 0.73s 2025-09-18 10:40:46.577222 | orchestrator | Change server address in the kubeconfig inside the manager service ------ 0.70s 2025-09-18 10:40:46.577233 | orchestrator | Get kubeconfig file ----------------------------------------------------- 0.70s 2025-09-18 10:40:46.577244 | orchestrator | Get home directory of operator user ------------------------------------- 0.67s 2025-09-18 10:40:46.577255 | orchestrator | Set KUBECONFIG environment variable ------------------------------------- 0.39s 2025-09-18 10:40:46.577265 | orchestrator | Enable kubectl command line completion 
---------------------------------- 0.28s 2025-09-18 10:40:46.577276 | orchestrator | 2025-09-18 10:40:46.577287 | orchestrator | 2025-09-18 10:40:46.577297 | orchestrator | PLAY [Set kolla_action_rabbitmq] *********************************************** 2025-09-18 10:40:46.577308 | orchestrator | 2025-09-18 10:40:46.577319 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-18 10:40:46.577330 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:00.133) 0:00:00.133 **** 2025-09-18 10:40:46.577341 | orchestrator | ok: [localhost] => { 2025-09-18 10:40:46.577352 | orchestrator |  "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine." 2025-09-18 10:40:46.577364 | orchestrator | } 2025-09-18 10:40:46.577375 | orchestrator | 2025-09-18 10:40:46.577385 | orchestrator | TASK [Check RabbitMQ service] ************************************************** 2025-09-18 10:40:46.577396 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:00.062) 0:00:00.195 **** 2025-09-18 10:40:46.577408 | orchestrator | fatal: [localhost]: FAILED! 
=> {"changed": false, "elapsed": 1, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"} 2025-09-18 10:40:46.577420 | orchestrator | ...ignoring 2025-09-18 10:40:46.577431 | orchestrator | 2025-09-18 10:40:46.577442 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ****** 2025-09-18 10:40:46.577453 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:02.117) 0:00:02.313 **** 2025-09-18 10:40:46.577464 | orchestrator | skipping: [localhost] 2025-09-18 10:40:46.577475 | orchestrator | 2025-09-18 10:40:46.577486 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] ***************************** 2025-09-18 10:40:46.577496 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.062) 0:00:02.376 **** 2025-09-18 10:40:46.577507 | orchestrator | ok: [localhost] 2025-09-18 10:40:46.577518 | orchestrator | 2025-09-18 10:40:46.577552 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:40:46.577563 | orchestrator | 2025-09-18 10:40:46.577574 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:40:46.577585 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.387) 0:00:02.764 **** 2025-09-18 10:40:46.577604 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:40:46.577615 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:40:46.577625 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:40:46.577636 | orchestrator | 2025-09-18 10:40:46.577647 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:40:46.577658 | orchestrator | Thursday 18 September 2025 10:38:25 +0000 (0:00:00.904) 0:00:03.669 **** 2025-09-18 10:40:46.577668 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True) 2025-09-18 10:40:46.577680 | orchestrator | ok: [testbed-node-1] => 
(item=enable_rabbitmq_True) 2025-09-18 10:40:46.577691 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True) 2025-09-18 10:40:46.577701 | orchestrator | 2025-09-18 10:40:46.577712 | orchestrator | PLAY [Apply role rabbitmq] ***************************************************** 2025-09-18 10:40:46.577723 | orchestrator | 2025-09-18 10:40:46.577734 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 10:40:46.577744 | orchestrator | Thursday 18 September 2025 10:38:26 +0000 (0:00:01.355) 0:00:05.025 **** 2025-09-18 10:40:46.577755 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:40:46.577766 | orchestrator | 2025-09-18 10:40:46.577777 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-18 10:40:46.577788 | orchestrator | Thursday 18 September 2025 10:38:27 +0000 (0:00:01.002) 0:00:06.027 **** 2025-09-18 10:40:46.577798 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:40:46.577809 | orchestrator | 2025-09-18 10:40:46.577820 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] ********************************* 2025-09-18 10:40:46.577831 | orchestrator | Thursday 18 September 2025 10:38:29 +0000 (0:00:01.792) 0:00:07.819 **** 2025-09-18 10:40:46.577842 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.577853 | orchestrator | 2025-09-18 10:40:46.577864 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] ************************************* 2025-09-18 10:40:46.577874 | orchestrator | Thursday 18 September 2025 10:38:30 +0000 (0:00:00.362) 0:00:08.182 **** 2025-09-18 10:40:46.577885 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.577896 | orchestrator | 2025-09-18 10:40:46.577907 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ****** 2025-09-18 10:40:46.577918 | 
orchestrator | Thursday 18 September 2025 10:38:30 +0000 (0:00:00.616) 0:00:08.799 **** 2025-09-18 10:40:46.577928 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.577939 | orchestrator | 2025-09-18 10:40:46.577950 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] ********************** 2025-09-18 10:40:46.577961 | orchestrator | Thursday 18 September 2025 10:38:31 +0000 (0:00:00.407) 0:00:09.207 **** 2025-09-18 10:40:46.577972 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.577982 | orchestrator | 2025-09-18 10:40:46.577993 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 10:40:46.578004 | orchestrator | Thursday 18 September 2025 10:38:31 +0000 (0:00:00.478) 0:00:09.685 **** 2025-09-18 10:40:46.578060 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:40:46.578083 | orchestrator | 2025-09-18 10:40:46.578103 | orchestrator | TASK [rabbitmq : Get container facts] ****************************************** 2025-09-18 10:40:46.578157 | orchestrator | Thursday 18 September 2025 10:38:32 +0000 (0:00:00.850) 0:00:10.535 **** 2025-09-18 10:40:46.578170 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:40:46.578181 | orchestrator | 2025-09-18 10:40:46.578192 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] *************************************** 2025-09-18 10:40:46.578210 | orchestrator | Thursday 18 September 2025 10:38:33 +0000 (0:00:01.106) 0:00:11.642 **** 2025-09-18 10:40:46.578228 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.578248 | orchestrator | 2025-09-18 10:40:46.578269 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] *************************** 2025-09-18 10:40:46.578289 | orchestrator | Thursday 18 September 2025 10:38:34 +0000 (0:00:00.552) 0:00:12.195 **** 2025-09-18 10:40:46.578309 | orchestrator | 
skipping: [testbed-node-0] 2025-09-18 10:40:46.578320 | orchestrator | 2025-09-18 10:40:46.578331 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] **************************** 2025-09-18 10:40:46.578342 | orchestrator | Thursday 18 September 2025 10:38:34 +0000 (0:00:00.354) 0:00:12.550 **** 2025-09-18 10:40:46.578359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': 
'/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578403 | orchestrator | 2025-09-18 10:40:46.578414 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ****************** 2025-09-18 10:40:46.578425 | orchestrator | Thursday 18 September 2025 10:38:35 +0000 (0:00:01.341) 0:00:13.892 **** 2025-09-18 10:40:46.578446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 
'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578573 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 
'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.578616 | orchestrator | 2025-09-18 10:40:46.578628 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] ******************************* 2025-09-18 10:40:46.578639 | orchestrator | Thursday 18 September 2025 10:38:38 +0000 (0:00:02.797) 0:00:16.689 **** 2025-09-18 10:40:46.578650 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 10:40:46.578669 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 10:40:46.578687 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2) 2025-09-18 10:40:46.578705 | orchestrator | 2025-09-18 10:40:46.578724 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] *********************************** 
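The earlier "Check RabbitMQ service" task failed with "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672" (and was ignored, since the service was not yet deployed). That check is a poll-until-string-appears probe; a minimal sketch in Python, assuming an injectable `fetch` callable in place of a real HTTP request so the loop can be exercised offline (`wait_for_search_string`, `fetch`, and the retry policy here are illustrative, not the actual Ansible `wait_for` implementation):

```python
import time

def wait_for_search_string(fetch, needle, timeout=2.0, interval=0.5,
                           sleep=time.sleep):
    """Poll fetch() until its body contains `needle`, or raise TimeoutError.

    `fetch` is any zero-argument callable returning the response body as a
    string (e.g. a thin wrapper around urllib.request.urlopen); it is
    injected so the loop is testable without a live endpoint. Connection
    errors are treated as "service not up yet" and retried until the
    deadline, mirroring how a not-yet-deployed service behaves.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            if needle in fetch():
                return True
        except OSError:
            pass  # endpoint not reachable yet; keep retrying
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"Timeout when waiting for search string {needle}")
        sleep(interval)
```

With a short timeout and a service that is not up yet, this raises the same kind of timeout the task reported; once the management UI serves its page, the probe returns on the first poll that sees the string.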
2025-09-18 10:40:46.578741 | orchestrator | Thursday 18 September 2025 10:38:40 +0000 (0:00:02.213) 0:00:18.903 **** 2025-09-18 10:40:46.578759 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-18 10:40:46.578781 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-18 10:40:46.578803 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2) 2025-09-18 10:40:46.578814 | orchestrator | 2025-09-18 10:40:46.578825 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] ************************************** 2025-09-18 10:40:46.578853 | orchestrator | Thursday 18 September 2025 10:38:43 +0000 (0:00:02.630) 0:00:21.533 **** 2025-09-18 10:40:46.578865 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 10:40:46.578876 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 10:40:46.578886 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2) 2025-09-18 10:40:46.578897 | orchestrator | 2025-09-18 10:40:46.578908 | orchestrator | TASK [rabbitmq : Copying over advanced.config] ********************************* 2025-09-18 10:40:46.578919 | orchestrator | Thursday 18 September 2025 10:38:45 +0000 (0:00:01.607) 0:00:23.141 **** 2025-09-18 10:40:46.578929 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 10:40:46.578940 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 10:40:46.578951 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2) 2025-09-18 10:40:46.578962 | orchestrator | 2025-09-18 10:40:46.578972 | orchestrator | TASK [rabbitmq : Copying over 
definitions.json] ******************************** 2025-09-18 10:40:46.578983 | orchestrator | Thursday 18 September 2025 10:38:47 +0000 (0:00:02.586) 0:00:25.728 **** 2025-09-18 10:40:46.578993 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 10:40:46.579008 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 10:40:46.579026 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2) 2025-09-18 10:40:46.579045 | orchestrator | 2025-09-18 10:40:46.579063 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] ********************************* 2025-09-18 10:40:46.579081 | orchestrator | Thursday 18 September 2025 10:38:49 +0000 (0:00:01.822) 0:00:27.550 **** 2025-09-18 10:40:46.579101 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 10:40:46.579112 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 10:40:46.579123 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2) 2025-09-18 10:40:46.579134 | orchestrator | 2025-09-18 10:40:46.579145 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************ 2025-09-18 10:40:46.579155 | orchestrator | Thursday 18 September 2025 10:38:51 +0000 (0:00:02.088) 0:00:29.638 **** 2025-09-18 10:40:46.579166 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:40:46.579177 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:40:46.579188 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:40:46.579199 | orchestrator | 2025-09-18 10:40:46.579210 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************ 2025-09-18 10:40:46.579221 | orchestrator | Thursday 18 September 2025 
10:38:51 +0000 (0:00:00.465) 0:00:30.104 **** 2025-09-18 10:40:46.579233 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.579269 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.579284 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-09-18 10:40:46.579296 | orchestrator | 2025-09-18 10:40:46.579307 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] ************************************* 2025-09-18 10:40:46.579317 | orchestrator | Thursday 18 September 2025 10:38:53 +0000 (0:00:01.679) 0:00:31.783 **** 2025-09-18 10:40:46.579328 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:40:46.579339 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:40:46.579350 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:40:46.579362 | orchestrator | 2025-09-18 10:40:46.579381 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] ************************* 2025-09-18 
10:40:46.579400 | orchestrator | Thursday 18 September 2025 10:38:54 +0000 (0:00:00.954) 0:00:32.738 ****
2025-09-18 10:40:46.579418 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:40:46.579436 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:40:46.579448 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:40:46.579459 | orchestrator |
2025-09-18 10:40:46.579470 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-09-18 10:40:46.579481 | orchestrator | Thursday 18 September 2025 10:39:02 +0000 (0:00:07.992) 0:00:40.730 ****
2025-09-18 10:40:46.579492 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:40:46.579502 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:40:46.579513 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:40:46.579524 | orchestrator |
2025-09-18 10:40:46.579559 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-18 10:40:46.579570 | orchestrator |
2025-09-18 10:40:46.579581 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-18 10:40:46.579600 | orchestrator | Thursday 18 September 2025 10:39:03 +0000 (0:00:00.713) 0:00:41.443 ****
2025-09-18 10:40:46.579611 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:40:46.579622 | orchestrator |
2025-09-18 10:40:46.579633 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-18 10:40:46.579644 | orchestrator | Thursday 18 September 2025 10:39:03 +0000 (0:00:00.628) 0:00:42.072 ****
2025-09-18 10:40:46.579655 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:40:46.579666 | orchestrator |
2025-09-18 10:40:46.579677 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-18 10:40:46.579687 | orchestrator | Thursday 18 September 2025 10:39:04 +0000 (0:00:00.225) 0:00:42.297 ****
2025-09-18 10:40:46.579698 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:40:46.579709 | orchestrator |
2025-09-18 10:40:46.579720 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-18 10:40:46.579731 | orchestrator | Thursday 18 September 2025 10:39:06 +0000 (0:00:01.958) 0:00:44.255 ****
2025-09-18 10:40:46.579742 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:40:46.579753 | orchestrator |
2025-09-18 10:40:46.579764 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-18 10:40:46.579775 | orchestrator |
2025-09-18 10:40:46.579786 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-18 10:40:46.579797 | orchestrator | Thursday 18 September 2025 10:40:02 +0000 (0:00:55.879) 0:01:40.135 ****
2025-09-18 10:40:46.579808 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:40:46.579819 | orchestrator |
2025-09-18 10:40:46.579829 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-18 10:40:46.579840 | orchestrator | Thursday 18 September 2025 10:40:02 +0000 (0:00:00.619) 0:01:40.755 ****
2025-09-18 10:40:46.579851 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:40:46.579862 | orchestrator |
2025-09-18 10:40:46.579873 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-18 10:40:46.579884 | orchestrator | Thursday 18 September 2025 10:40:02 +0000 (0:00:00.251) 0:01:41.007 ****
2025-09-18 10:40:46.579895 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:40:46.579906 | orchestrator |
2025-09-18 10:40:46.579917 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-18 10:40:46.579928 | orchestrator | Thursday 18 September 2025 10:40:09 +0000 (0:00:07.013) 0:01:48.021 ****
2025-09-18 10:40:46.579938 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:40:46.579949 | orchestrator |
2025-09-18 10:40:46.579960 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-09-18 10:40:46.579971 | orchestrator |
2025-09-18 10:40:46.579982 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-09-18 10:40:46.579993 | orchestrator | Thursday 18 September 2025 10:40:22 +0000 (0:00:12.582) 0:02:00.604 ****
2025-09-18 10:40:46.580004 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:40:46.580015 | orchestrator |
2025-09-18 10:40:46.580038 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-09-18 10:40:46.580050 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:00.649) 0:02:01.253 ****
2025-09-18 10:40:46.580061 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:40:46.580072 | orchestrator |
2025-09-18 10:40:46.580083 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-09-18 10:40:46.580094 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:00.209) 0:02:01.462 ****
2025-09-18 10:40:46.580105 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:40:46.580116 | orchestrator |
2025-09-18 10:40:46.580127 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-09-18 10:40:46.580138 | orchestrator | Thursday 18 September 2025 10:40:24 +0000 (0:00:01.540) 0:02:03.002 ****
2025-09-18 10:40:46.580149 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:40:46.580160 | orchestrator |
2025-09-18 10:40:46.580170 | orchestrator | PLAY [Apply rabbitmq post-configuration] ***************************************
2025-09-18 10:40:46.580188 | orchestrator |
2025-09-18 10:40:46.580200 | orchestrator | TASK [Include rabbitmq post-deploy.yml] ****************************************
2025-09-18 10:40:46.580210 | orchestrator | Thursday 18
September 2025 10:40:41 +0000 (0:00:16.539) 0:02:19.542 ****
2025-09-18 10:40:46.580458 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:40:46.580473 | orchestrator |
2025-09-18 10:40:46.580484 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ******************************
2025-09-18 10:40:46.580495 | orchestrator | Thursday 18 September 2025 10:40:41 +0000 (0:00:00.566) 0:02:20.109 ****
2025-09-18 10:40:46.580506 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-18 10:40:46.580517 | orchestrator | enable_outward_rabbitmq_True
2025-09-18 10:40:46.580643 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring:
2025-09-18 10:40:46.580655 | orchestrator | outward_rabbitmq_restart
2025-09-18 10:40:46.580666 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:40:46.580677 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:40:46.580688 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:40:46.580699 | orchestrator |
2025-09-18 10:40:46.580710 | orchestrator | PLAY [Apply role rabbitmq (outward)] *******************************************
2025-09-18 10:40:46.580720 | orchestrator | skipping: no hosts matched
2025-09-18 10:40:46.580731 | orchestrator |
2025-09-18 10:40:46.580742 | orchestrator | PLAY [Restart rabbitmq (outward) services] *************************************
2025-09-18 10:40:46.580753 | orchestrator | skipping: no hosts matched
2025-09-18 10:40:46.580764 | orchestrator |
2025-09-18 10:40:46.580774 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] *****************************
2025-09-18 10:40:46.580785 | orchestrator | skipping: no hosts matched
2025-09-18 10:40:46.580796 | orchestrator |
2025-09-18 10:40:46.580807 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:40:46.580818 | orchestrator | localhost      : ok=3   changed=0   unreachable=0 failed=0 skipped=1  rescued=0 ignored=1
2025-09-18 10:40:46.580830 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0
2025-09-18 10:40:46.580841 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:40:46.580852 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:40:46.580863 | orchestrator |
2025-09-18 10:40:46.580874 | orchestrator |
2025-09-18 10:40:46.580885 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:40:46.580896 | orchestrator | Thursday 18 September 2025 10:40:44 +0000 (0:00:02.743) 0:02:22.852 ****
2025-09-18 10:40:46.580907 | orchestrator | ===============================================================================
2025-09-18 10:40:46.580918 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 85.00s
2025-09-18 10:40:46.580929 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 10.51s
2025-09-18 10:40:46.580939 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 7.99s
2025-09-18 10:40:46.580950 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.80s
2025-09-18 10:40:46.580961 | orchestrator | rabbitmq : Enable all stable feature flags ------------------------------ 2.74s
2025-09-18 10:40:46.580972 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.63s
2025-09-18 10:40:46.580985 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.59s
2025-09-18 10:40:46.581003 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 2.21s
2025-09-18 10:40:46.581022 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.12s
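The PLAY RECAP above is the quickest health signal in a run like this: `failed=0` and `unreachable=0` on every host. A minimal sketch of a helper that parses such recap lines into counters, so CI tooling could assert on them (hypothetical helper, not part of OSISM, Zuul, or Ansible):

```python
import re

# Matches Ansible PLAY RECAP lines such as:
#   testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0
RECAP_RE = re.compile(r"^(?P<host>\S+)\s*:\s*(?P<stats>(?:\w+=\d+\s*)+)$")

def parse_recap_line(line):
    """Return (host, {counter: value}) for a recap line, else None."""
    m = RECAP_RE.match(line.strip())
    if not m:
        return None
    stats = {
        key: int(value)
        for key, value in (pair.split("=") for pair in m.group("stats").split())
    }
    return m.group("host"), stats

line = "testbed-node-0 : ok=23 changed=14 unreachable=0 failed=0 skipped=8 rescued=0 ignored=0"
host, stats = parse_recap_line(line)
assert stats["failed"] == 0 and stats["unreachable"] == 0
```

Non-recap lines (task output, timestamps) simply return `None`, so the helper can be run over a whole console log.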
2025-09-18 10:40:46.581039 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 2.09s
2025-09-18 10:40:46.581071 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 1.90s
2025-09-18 10:40:46.581085 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.82s
2025-09-18 10:40:46.581097 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.79s
2025-09-18 10:40:46.581109 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.68s
2025-09-18 10:40:46.581122 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.61s
2025-09-18 10:40:46.581134 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.36s
2025-09-18 10:40:46.581146 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.34s
2025-09-18 10:40:46.581174 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.11s
2025-09-18 10:40:46.581194 | orchestrator | rabbitmq : include_tasks ------------------------------------------------ 1.00s
2025-09-18 10:40:46.581213 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 0.95s
2025-09-18 10:40:46.581232 | orchestrator | 2025-09-18 10:40:46 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:40:46.581252 | orchestrator | 2025-09-18 10:40:46 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED
2025-09-18 10:40:46.581272 | orchestrator | 2025-09-18 10:40:46 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:40:49.628847 | orchestrator | 2025-09-18 10:40:49 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:40:49.628944 | orchestrator | 2025-09-18 10:40:49 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:40:49.629616 | orchestrator | 2025-09-18 10:40:49 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state STARTED
2025-09-18 10:40:49.629785 | orchestrator | 2025-09-18 10:40:49 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:41:29.188356 | orchestrator | 2025-09-18 10:41:29 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED
2025-09-18 10:41:29.188549 | orchestrator | 2025-09-18 10:41:29 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:41:29.189962 | orchestrator | 2025-09-18 10:41:29 | INFO  | Task 249f3452-832e-4faf-9b3b-2a045342bf2b is in state SUCCESS
2025-09-18 10:41:29.191882 | orchestrator |
2025-09-18 10:41:29.191926 | orchestrator |
2025-09-18 10:41:29.191939 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:41:29.191952 | orchestrator |
2025-09-18 10:41:29.191964 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:41:29.191975 | orchestrator | Thursday 18 September 2025 10:39:14 +0000 (0:00:00.514) 0:00:00.514 ****
2025-09-18 10:41:29.191987 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.192000 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.192011 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.192022 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:41:29.192033 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:41:29.192043 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:41:29.192054 | orchestrator |
2025-09-18 10:41:29.192065 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:41:29.192076 | orchestrator | Thursday 18 September 2025 10:39:16 +0000 (0:00:01.557) 0:00:02.071 **** 2025-09-18 10:41:29.192087 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-09-18 10:41:29.192099 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-09-18 10:41:29.192110 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-09-18 10:41:29.192120 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-09-18 10:41:29.192131 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-09-18 10:41:29.192142 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-09-18 10:41:29.192153 | orchestrator | 2025-09-18 10:41:29.192164 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-09-18 10:41:29.192175 | orchestrator | 2025-09-18 10:41:29.192186 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-09-18 10:41:29.192197 | orchestrator | Thursday 18 September 2025 10:39:17 +0000 (0:00:01.045) 0:00:03.117 **** 2025-09-18 10:41:29.192613 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:41:29.192640 | orchestrator | 2025-09-18 10:41:29.192651 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-09-18 10:41:29.192663 | orchestrator | Thursday 18 September 2025 10:39:18 +0000 (0:00:00.941) 0:00:04.058 **** 2025-09-18 10:41:29.192678 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192735 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192747 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192758 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192770 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192781 | orchestrator | 2025-09-18 10:41:29.192805 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-09-18 10:41:29.192817 | orchestrator | Thursday 18 September 2025 10:39:20 +0000 (0:00:01.631) 0:00:05.690 **** 2025-09-18 10:41:29.192828 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192840 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192860 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192872 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192891 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192903 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.192914 | orchestrator | 2025-09-18 10:41:29.192925 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] ************* 2025-09-18 10:41:29.192936 | orchestrator | Thursday 18 September 2025 10:39:22 +0000 (0:00:02.136) 0:00:07.826 **** 2025-09-18 10:41:29.192947 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.192959 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.192981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.192993 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193004 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193021 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193039 | orchestrator |
2025-09-18 10:41:29.193051 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-09-18 10:41:29.193062 | orchestrator | Thursday 18 September 2025 10:39:24 +0000 (0:00:01.925) 0:00:09.752 ****
2025-09-18 10:41:29.193078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193097 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193114 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193132 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193152 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193171 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193189 | orchestrator |
2025-09-18 10:41:29.193219 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-09-18 10:41:29.193239 | orchestrator | Thursday 18 September 2025 10:39:26 +0000 (0:00:01.956) 0:00:11.709 ****
2025-09-18 10:41:29.193255 | orchestrator | changed:
[testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193268 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193296 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193310 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193323 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193335 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-controller:2024.2', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.193347 | orchestrator |
2025-09-18 10:41:29.193360 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-09-18 10:41:29.193372 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:01.870) 0:00:13.579 ****
2025-09-18 10:41:29.193385 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:41:29.193397 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:41:29.193409 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:41:29.193421 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:41:29.193433 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:41:29.193471 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:41:29.193484 | orchestrator |
2025-09-18 10:41:29.193496 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-09-18 10:41:29.193509 | orchestrator | Thursday 18 September 2025 10:39:30 +0000 (0:00:02.575) 0:00:16.155 ****
2025-09-18 10:41:29.193521 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-09-18 10:41:29.193534 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-09-18 10:41:29.193547 |
orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-09-18 10:41:29.193558 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-09-18 10:41:29.193569 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-09-18 10:41:29.193580 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-09-18 10:41:29.193591 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193602 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193621 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193632 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193653 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193664 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-09-18 10:41:29.193675 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193689 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193700 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193711 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193722 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193733 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-09-18 10:41:29.193744 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193762 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193774 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193785 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193807 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-09-18 10:41:29.193818 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18 10:41:29.193829 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18 10:41:29.193840 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18 10:41:29.193851 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18 10:41:29.193862 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18 10:41:29.193874 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-09-18
10:41:29.193885 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193896 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193907 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193930 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193941 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-09-18 10:41:29.193951 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-18 10:41:29.193963 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-18 10:41:29.193974 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-18 10:41:29.193994 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-18 10:41:29.194012 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-09-18 10:41:29.194102 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-09-18 10:41:29.194123 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-09-18 10:41:29.194145 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-09-18 10:41:29.194165 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-09-18 10:41:29.194177 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-09-18 10:41:29.194189 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-09-18 10:41:29.194200 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-09-18 10:41:29.194210 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-18 10:41:29.194222 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-18 10:41:29.194233 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-18 10:41:29.194244 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-18 10:41:29.194255 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-09-18 10:41:29.194266 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-09-18 10:41:29.194277 | orchestrator |
2025-09-18 10:41:29.194288 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194306 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:17.713) 0:00:33.869 ****
2025-09-18 10:41:29.194317 | orchestrator |
2025-09-18 10:41:29.194328 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194339 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.199) 0:00:34.068 ****
2025-09-18 10:41:29.194350 | orchestrator |
2025-09-18 10:41:29.194361 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194372 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.072) 0:00:34.141 ****
2025-09-18 10:41:29.194383 | orchestrator |
2025-09-18 10:41:29.194394 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194405 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.062) 0:00:34.203 ****
2025-09-18 10:41:29.194416 | orchestrator |
2025-09-18 10:41:29.194427 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194438 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.059) 0:00:34.263 ****
2025-09-18 10:41:29.194470 | orchestrator |
2025-09-18 10:41:29.194481 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-09-18 10:41:29.194491 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.079) 0:00:34.342 ****
2025-09-18 10:41:29.194503 | orchestrator |
2025-09-18 10:41:29.194514 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-09-18 10:41:29.194524 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.087) 0:00:34.429 ****
2025-09-18 10:41:29.194545 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:41:29.194557 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.194568 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.194579 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:41:29.194590 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.194601 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:41:29.194612 | orchestrator |
2025-09-18 10:41:29.194623 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-09-18 10:41:29.194634 | orchestrator | Thursday 18 September 2025 10:39:50 +0000 (0:00:01.668) 0:00:36.098 ****
2025-09-18 10:41:29.194645 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:41:29.194656 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:41:29.194667 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:41:29.194678 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:41:29.194689 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:41:29.194700 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:41:29.194711 | orchestrator |
2025-09-18 10:41:29.194721 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-09-18 10:41:29.194732 | orchestrator |
2025-09-18 10:41:29.194744 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-18 10:41:29.194754 | orchestrator | Thursday 18 September 2025 10:40:16 +0000 (0:00:26.043) 0:01:02.141 ****
2025-09-18 10:41:29.194766 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:41:29.194777 | orchestrator |
2025-09-18 10:41:29.194788 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-18 10:41:29.194799 | orchestrator | Thursday 18 September 2025 10:40:17 +0000 (0:00:00.793) 0:01:02.934 ****
2025-09-18 10:41:29.194810 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:41:29.194821 | orchestrator |
2025-09-18 10:41:29.194832 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-09-18
10:41:29.194843 | orchestrator | Thursday 18 September 2025 10:40:17 +0000 (0:00:00.627) 0:01:03.561 ****
2025-09-18 10:41:29.194854 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.194865 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.194876 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.194887 | orchestrator |
2025-09-18 10:41:29.194899 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-09-18 10:41:29.194917 | orchestrator | Thursday 18 September 2025 10:40:19 +0000 (0:00:01.093) 0:01:04.655 ****
2025-09-18 10:41:29.194937 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.194956 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.194977 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.195006 | orchestrator |
2025-09-18 10:41:29.195028 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-09-18 10:41:29.195049 | orchestrator | Thursday 18 September 2025 10:40:19 +0000 (0:00:00.426) 0:01:05.082 ****
2025-09-18 10:41:29.195061 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.195072 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.195086 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.195103 | orchestrator |
2025-09-18 10:41:29.195122 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-09-18 10:41:29.195139 | orchestrator | Thursday 18 September 2025 10:40:19 +0000 (0:00:00.418) 0:01:05.500 ****
2025-09-18 10:41:29.195156 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.195172 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.195190 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.195207 | orchestrator |
2025-09-18 10:41:29.195226 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-09-18 10:41:29.195245 | orchestrator | Thursday 18 September 2025 10:40:20 +0000 (0:00:00.458) 0:01:05.959 ****
2025-09-18 10:41:29.195262 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.195278 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.195299 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.195310 | orchestrator |
2025-09-18 10:41:29.195321 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-09-18 10:41:29.195332 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:00.661) 0:01:06.620 ****
2025-09-18 10:41:29.195343 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195354 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195365 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195376 | orchestrator |
2025-09-18 10:41:29.195387 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-09-18 10:41:29.195398 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:00.342) 0:01:06.963 ****
2025-09-18 10:41:29.195409 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195420 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195430 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195500 | orchestrator |
2025-09-18 10:41:29.195528 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-09-18 10:41:29.195540 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:00.283) 0:01:07.246 ****
2025-09-18 10:41:29.195552 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195564 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195575 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195587 | orchestrator |
2025-09-18 10:41:29.195598 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-09-18 10:41:29.195609 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:00.265)
0:01:07.512 ****
2025-09-18 10:41:29.195622 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195641 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195659 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195676 | orchestrator |
2025-09-18 10:41:29.195692 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-09-18 10:41:29.195709 | orchestrator | Thursday 18 September 2025 10:40:22 +0000 (0:00:00.435) 0:01:07.948 ****
2025-09-18 10:41:29.195725 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195741 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195756 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195766 | orchestrator |
2025-09-18 10:41:29.195776 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-09-18 10:41:29.195786 | orchestrator | Thursday 18 September 2025 10:40:22 +0000 (0:00:00.271) 0:01:08.220 ****
2025-09-18 10:41:29.195796 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195805 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195815 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195824 | orchestrator |
2025-09-18 10:41:29.195834 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-09-18 10:41:29.195844 | orchestrator | Thursday 18 September 2025 10:40:22 +0000 (0:00:00.267) 0:01:08.487 ****
2025-09-18 10:41:29.195854 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195863 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195873 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195882 | orchestrator |
2025-09-18 10:41:29.195892 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-09-18 10:41:29.195901 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:00.260) 0:01:08.748 ****
2025-09-18 10:41:29.195911 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195921 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195931 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.195940 | orchestrator |
2025-09-18 10:41:29.195950 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-09-18 10:41:29.195959 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:00.247) 0:01:08.995 ****
2025-09-18 10:41:29.195969 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.195979 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.195988 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196007 | orchestrator |
2025-09-18 10:41:29.196017 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-09-18 10:41:29.196027 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:00.401) 0:01:09.397 ****
2025-09-18 10:41:29.196036 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196046 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196056 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196065 | orchestrator |
2025-09-18 10:41:29.196075 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-09-18 10:41:29.196084 | orchestrator | Thursday 18 September 2025 10:40:24 +0000 (0:00:00.286) 0:01:09.683 ****
2025-09-18 10:41:29.196094 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196104 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196114 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196123 | orchestrator |
2025-09-18 10:41:29.196133 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-09-18 10:41:29.196143 | orchestrator | Thursday 18 September 2025 10:40:24 +0000 (0:00:00.284)
0:01:09.967 ****
2025-09-18 10:41:29.196152 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196162 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196180 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196190 | orchestrator |
2025-09-18 10:41:29.196200 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-09-18 10:41:29.196210 | orchestrator | Thursday 18 September 2025 10:40:24 +0000 (0:00:00.257) 0:01:10.225 ****
2025-09-18 10:41:29.196220 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:41:29.196230 | orchestrator |
2025-09-18 10:41:29.196239 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-09-18 10:41:29.196249 | orchestrator | Thursday 18 September 2025 10:40:25 +0000 (0:00:00.666) 0:01:10.891 ****
2025-09-18 10:41:29.196258 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.196268 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.196278 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.196287 | orchestrator |
2025-09-18 10:41:29.196297 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-09-18 10:41:29.196307 | orchestrator | Thursday 18 September 2025 10:40:25 +0000 (0:00:00.385) 0:01:11.277 ****
2025-09-18 10:41:29.196317 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:41:29.196326 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:41:29.196336 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:41:29.196346 | orchestrator |
2025-09-18 10:41:29.196355 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-09-18 10:41:29.196365 | orchestrator | Thursday 18 September 2025 10:40:26 +0000 (0:00:00.410) 0:01:11.688 ****
2025-09-18 10:41:29.196375 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196384 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196399 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196415 | orchestrator |
2025-09-18 10:41:29.196430 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-09-18 10:41:29.196465 | orchestrator | Thursday 18 September 2025 10:40:26 +0000 (0:00:00.418) 0:01:12.106 ****
2025-09-18 10:41:29.196481 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196498 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196522 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196539 | orchestrator |
2025-09-18 10:41:29.196550 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-09-18 10:41:29.196559 | orchestrator | Thursday 18 September 2025 10:40:26 +0000 (0:00:00.301) 0:01:12.408 ****
2025-09-18 10:41:29.196569 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196579 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196588 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196598 | orchestrator |
2025-09-18 10:41:29.196608 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-09-18 10:41:29.196627 | orchestrator | Thursday 18 September 2025 10:40:27 +0000 (0:00:00.279) 0:01:12.688 ****
2025-09-18 10:41:29.196637 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196646 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196656 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196666 | orchestrator |
2025-09-18 10:41:29.196676 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-09-18 10:41:29.196685 | orchestrator | Thursday 18 September 2025 10:40:27 +0000 (0:00:00.467) 0:01:13.155 ****
2025-09-18 10:41:29.196695 | orchestrator | skipping:
[testbed-node-0]
2025-09-18 10:41:29.196705 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196715 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196724 | orchestrator |
2025-09-18 10:41:29.196734 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-09-18 10:41:29.196743 | orchestrator | Thursday 18 September 2025 10:40:28 +0000 (0:00:00.483) 0:01:13.639 ****
2025-09-18 10:41:29.196753 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:41:29.196763 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:41:29.196772 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:41:29.196782 | orchestrator |
2025-09-18 10:41:29.196792 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-09-18 10:41:29.196801 | orchestrator | Thursday 18 September 2025 10:40:28 +0000 (0:00:00.336) 0:01:13.975 ****
2025-09-18 10:41:29.196813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196826 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196836 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196878 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196888 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196931 | orchestrator |
2025-09-18 10:41:29.196941 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ********************
2025-09-18 10:41:29.196951 | orchestrator | Thursday 18 September 2025 10:40:29 +0000 (0:00:01.528) 0:01:15.504 ****
2025-09-18 10:41:29.196961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:41:29.196981 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name':
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.196991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197006 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197016 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197065 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197083 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197108 | orchestrator | 2025-09-18 10:41:29.197118 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-18 10:41:29.197128 | orchestrator | Thursday 18 September 2025 10:40:33 +0000 (0:00:03.863) 0:01:19.368 **** 2025-09-18 10:41:29.197138 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197149 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-18 10:41:29.197159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197169 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197196 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197223 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.197243 | orchestrator | 2025-09-18 10:41:29.197258 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.197268 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:02.322) 0:01:21.691 **** 2025-09-18 10:41:29.197278 | orchestrator | 2025-09-18 10:41:29.197288 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.197297 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:00.071) 0:01:21.762 **** 2025-09-18 10:41:29.197307 | orchestrator | 2025-09-18 10:41:29.197316 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.197326 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:00.058) 0:01:21.821 **** 2025-09-18 10:41:29.197336 | orchestrator | 2025-09-18 10:41:29.197345 | 
orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-18 10:41:29.197355 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:00.066) 0:01:21.887 **** 2025-09-18 10:41:29.197364 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.197374 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.197383 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.197393 | orchestrator | 2025-09-18 10:41:29.197402 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-18 10:41:29.197412 | orchestrator | Thursday 18 September 2025 10:40:39 +0000 (0:00:02.915) 0:01:24.803 **** 2025-09-18 10:41:29.197421 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.197431 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.197464 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.197474 | orchestrator | 2025-09-18 10:41:29.197484 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-18 10:41:29.197494 | orchestrator | Thursday 18 September 2025 10:40:45 +0000 (0:00:06.592) 0:01:31.396 **** 2025-09-18 10:41:29.197504 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.197513 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.197523 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.197533 | orchestrator | 2025-09-18 10:41:29.197542 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-18 10:41:29.197552 | orchestrator | Thursday 18 September 2025 10:40:48 +0000 (0:00:02.333) 0:01:33.730 **** 2025-09-18 10:41:29.197562 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:41:29.197571 | orchestrator | 2025-09-18 10:41:29.197581 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-18 10:41:29.197591 | orchestrator | Thursday 18 
September 2025 10:40:48 +0000 (0:00:00.448) 0:01:34.179 **** 2025-09-18 10:41:29.197601 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.197610 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.197620 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.197629 | orchestrator | 2025-09-18 10:41:29.197639 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-18 10:41:29.197655 | orchestrator | Thursday 18 September 2025 10:40:49 +0000 (0:00:00.977) 0:01:35.157 **** 2025-09-18 10:41:29.197665 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:41:29.197675 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:41:29.197684 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.197694 | orchestrator | 2025-09-18 10:41:29.197704 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-18 10:41:29.197713 | orchestrator | Thursday 18 September 2025 10:40:50 +0000 (0:00:00.710) 0:01:35.867 **** 2025-09-18 10:41:29.197723 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.197733 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.197751 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.197769 | orchestrator | 2025-09-18 10:41:29.197787 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-18 10:41:29.197804 | orchestrator | Thursday 18 September 2025 10:40:51 +0000 (0:00:01.015) 0:01:36.882 **** 2025-09-18 10:41:29.197823 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:41:29.197842 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:41:29.197859 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.197873 | orchestrator | 2025-09-18 10:41:29.197883 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-18 10:41:29.197893 | orchestrator | Thursday 18 September 2025 10:40:51 +0000 
(0:00:00.583) 0:01:37.466 **** 2025-09-18 10:41:29.197903 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.197912 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.197929 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.197939 | orchestrator | 2025-09-18 10:41:29.197949 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-18 10:41:29.197959 | orchestrator | Thursday 18 September 2025 10:40:53 +0000 (0:00:01.202) 0:01:38.669 **** 2025-09-18 10:41:29.197968 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.197978 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.197988 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.197997 | orchestrator | 2025-09-18 10:41:29.198007 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-09-18 10:41:29.198085 | orchestrator | Thursday 18 September 2025 10:40:53 +0000 (0:00:00.690) 0:01:39.360 **** 2025-09-18 10:41:29.198098 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.198108 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.198118 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.198127 | orchestrator | 2025-09-18 10:41:29.198137 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-09-18 10:41:29.198147 | orchestrator | Thursday 18 September 2025 10:40:54 +0000 (0:00:00.288) 0:01:39.649 **** 2025-09-18 10:41:29.198157 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198174 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 
'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198184 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198195 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198216 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198226 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198237 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198247 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198272 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198282 | orchestrator | 2025-09-18 10:41:29.198292 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-09-18 10:41:29.198301 | orchestrator | Thursday 18 September 2025 10:40:55 +0000 (0:00:01.405) 0:01:41.054 **** 2025-09-18 10:41:29.198312 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198322 | orchestrator | 
ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198332 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198348 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198376 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198386 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198396 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198416 | orchestrator | 2025-09-18 10:41:29.198426 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-09-18 10:41:29.198436 | orchestrator | Thursday 18 September 2025 10:40:59 +0000 (0:00:04.141) 0:01:45.195 **** 2025-09-18 10:41:29.198505 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198516 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198526 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-northd:2024.2', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198534 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198557 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-nb-db-server:2024.2', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198574 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198583 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198591 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/ovn-sb-db-server:2024.2', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:41:29.198599 | orchestrator | 2025-09-18 10:41:29.198607 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.198616 | orchestrator | Thursday 18 September 2025 10:41:02 +0000 (0:00:03.387) 0:01:48.583 **** 2025-09-18 10:41:29.198624 | orchestrator | 2025-09-18 10:41:29.198632 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.198640 | orchestrator 
| Thursday 18 September 2025 10:41:03 +0000 (0:00:00.078) 0:01:48.661 **** 2025-09-18 10:41:29.198648 | orchestrator | 2025-09-18 10:41:29.198656 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-09-18 10:41:29.198663 | orchestrator | Thursday 18 September 2025 10:41:03 +0000 (0:00:00.071) 0:01:48.733 **** 2025-09-18 10:41:29.198671 | orchestrator | 2025-09-18 10:41:29.198679 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-09-18 10:41:29.198687 | orchestrator | Thursday 18 September 2025 10:41:03 +0000 (0:00:00.068) 0:01:48.802 **** 2025-09-18 10:41:29.198696 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.198704 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.198712 | orchestrator | 2025-09-18 10:41:29.198724 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-09-18 10:41:29.198733 | orchestrator | Thursday 18 September 2025 10:41:09 +0000 (0:00:06.424) 0:01:55.226 **** 2025-09-18 10:41:29.198740 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.198749 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.198757 | orchestrator | 2025-09-18 10:41:29.198764 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-09-18 10:41:29.198772 | orchestrator | Thursday 18 September 2025 10:41:16 +0000 (0:00:06.749) 0:02:01.975 **** 2025-09-18 10:41:29.198780 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:41:29.198788 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:41:29.198802 | orchestrator | 2025-09-18 10:41:29.198810 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-09-18 10:41:29.198818 | orchestrator | Thursday 18 September 2025 10:41:23 +0000 (0:00:06.970) 0:02:08.946 **** 2025-09-18 10:41:29.198826 | orchestrator | skipping: [testbed-node-0] 
2025-09-18 10:41:29.198834 | orchestrator | 2025-09-18 10:41:29.198842 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-09-18 10:41:29.198850 | orchestrator | Thursday 18 September 2025 10:41:23 +0000 (0:00:00.141) 0:02:09.087 **** 2025-09-18 10:41:29.198858 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.198866 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.198874 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.198882 | orchestrator | 2025-09-18 10:41:29.198890 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-09-18 10:41:29.198898 | orchestrator | Thursday 18 September 2025 10:41:24 +0000 (0:00:00.849) 0:02:09.937 **** 2025-09-18 10:41:29.198905 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:41:29.198913 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:41:29.198921 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.198929 | orchestrator | 2025-09-18 10:41:29.198937 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-09-18 10:41:29.198954 | orchestrator | Thursday 18 September 2025 10:41:25 +0000 (0:00:00.723) 0:02:10.660 **** 2025-09-18 10:41:29.198967 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.198980 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.198993 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.199006 | orchestrator | 2025-09-18 10:41:29.199020 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-09-18 10:41:29.199034 | orchestrator | Thursday 18 September 2025 10:41:25 +0000 (0:00:00.878) 0:02:11.539 **** 2025-09-18 10:41:29.199047 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:41:29.199060 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:41:29.199068 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:41:29.199077 | orchestrator 
| 2025-09-18 10:41:29.199090 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-09-18 10:41:29.199102 | orchestrator | Thursday 18 September 2025 10:41:26 +0000 (0:00:00.714) 0:02:12.253 **** 2025-09-18 10:41:29.199114 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.199126 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.199139 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.199151 | orchestrator | 2025-09-18 10:41:29.199164 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-09-18 10:41:29.199177 | orchestrator | Thursday 18 September 2025 10:41:27 +0000 (0:00:00.850) 0:02:13.104 **** 2025-09-18 10:41:29.199189 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:41:29.199203 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:41:29.199216 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:41:29.199229 | orchestrator | 2025-09-18 10:41:29.199239 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:41:29.199248 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-18 10:41:29.199257 | orchestrator | testbed-node-1 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-18 10:41:29.199265 | orchestrator | testbed-node-2 : ok=43  changed=19  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-09-18 10:41:29.199273 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:41:29.199282 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:41:29.199289 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:41:29.199306 | orchestrator | 2025-09-18 10:41:29.199314 | orchestrator | 2025-09-18 
10:41:29.199322 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:41:29.199330 | orchestrator | Thursday 18 September 2025 10:41:28 +0000 (0:00:00.977) 0:02:14.081 **** 2025-09-18 10:41:29.199338 | orchestrator | =============================================================================== 2025-09-18 10:41:29.199345 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 26.04s 2025-09-18 10:41:29.199353 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 17.71s 2025-09-18 10:41:29.199361 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.34s 2025-09-18 10:41:29.199369 | orchestrator | ovn-db : Restart ovn-nb-db container ------------------------------------ 9.34s 2025-09-18 10:41:29.199377 | orchestrator | ovn-db : Restart ovn-northd container ----------------------------------- 9.30s 2025-09-18 10:41:29.199385 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.14s 2025-09-18 10:41:29.199393 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 3.86s 2025-09-18 10:41:29.199406 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.39s 2025-09-18 10:41:29.199415 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 2.58s 2025-09-18 10:41:29.199423 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.32s 2025-09-18 10:41:29.199430 | orchestrator | ovn-controller : Copying over config.json files for services ------------ 2.14s 2025-09-18 10:41:29.199438 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 1.96s 2025-09-18 10:41:29.199464 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.93s 2025-09-18 10:41:29.199472 | 
orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.87s 2025-09-18 10:41:29.199480 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 1.67s 2025-09-18 10:41:29.199488 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.63s 2025-09-18 10:41:29.199496 | orchestrator | Group hosts based on Kolla action --------------------------------------- 1.56s 2025-09-18 10:41:29.199504 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.53s 2025-09-18 10:41:29.199512 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.41s 2025-09-18 10:41:29.199520 | orchestrator | ovn-db : Wait for ovn-nb-db --------------------------------------------- 1.20s 2025-09-18 10:41:29.199528 | orchestrator | 2025-09-18 10:41:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:41:32.283991 | orchestrator | 2025-09-18 10:41:32 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:41:32.285141 | orchestrator | 2025-09-18 10:41:32 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:41:32.285189 | orchestrator | 2025-09-18 10:41:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:41:35.346556 | orchestrator | 2025-09-18 10:41:35 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:41:35.347323 | orchestrator | 2025-09-18 10:41:35 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:41:35.347581 | orchestrator | 2025-09-18 10:41:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:41:38.387790 | orchestrator | 2025-09-18 10:41:38 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:41:38.389242 | orchestrator | 2025-09-18 10:41:38 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 
2025-09-18 10:44:16.779933 | orchestrator | 2025-09-18 10:44:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:44:19.825915 | orchestrator | 2025-09-18 10:44:19 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:44:19.828595 | orchestrator | 2025-09-18 10:44:19 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:44:19.828633 | orchestrator | 2025-09-18 10:44:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:44:22.866465 | orchestrator | 2025-09-18 10:44:22 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:44:22.867839 | orchestrator | 2025-09-18 10:44:22 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:44:22.867871 | orchestrator | 2025-09-18 10:44:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:44:25.898566 | orchestrator | 2025-09-18 10:44:25 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state STARTED 2025-09-18 10:44:25.899766 | orchestrator | 2025-09-18 10:44:25 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:44:25.899789 | orchestrator | 2025-09-18 10:44:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:44:28.948523 | orchestrator | 2025-09-18 10:44:28 | INFO  | Task fe4d05c5-886d-457b-a7fa-0db2f58b352e is in state SUCCESS 2025-09-18 10:44:28.950429 | orchestrator | 2025-09-18 10:44:28.950467 | orchestrator | 2025-09-18 10:44:28.950481 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:44:28.950493 | orchestrator | 2025-09-18 10:44:28.950504 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:44:28.950515 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.541) 0:00:00.541 **** 2025-09-18 10:44:28.950527 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.950539 | 
orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.950550 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:44:28.950561 | orchestrator | 2025-09-18 10:44:28.950572 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:44:28.950584 | orchestrator | Thursday 18 September 2025 10:38:07 +0000 (0:00:00.510) 0:00:01.052 **** 2025-09-18 10:44:28.950595 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True) 2025-09-18 10:44:28.950607 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True) 2025-09-18 10:44:28.950618 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True) 2025-09-18 10:44:28.950651 | orchestrator | 2025-09-18 10:44:28.950663 | orchestrator | PLAY [Apply role loadbalancer] ************************************************* 2025-09-18 10:44:28.950674 | orchestrator | 2025-09-18 10:44:28.950685 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-09-18 10:44:28.950696 | orchestrator | Thursday 18 September 2025 10:38:07 +0000 (0:00:00.649) 0:00:01.701 **** 2025-09-18 10:44:28.950707 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.950718 | orchestrator | 2025-09-18 10:44:28.951132 | orchestrator | TASK [loadbalancer : Check IPv6 support] *************************************** 2025-09-18 10:44:28.951176 | orchestrator | Thursday 18 September 2025 10:38:08 +0000 (0:00:00.585) 0:00:02.287 **** 2025-09-18 10:44:28.951190 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.951203 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.951214 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:44:28.951227 | orchestrator | 2025-09-18 10:44:28.951239 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-18 10:44:28.951252 | orchestrator | Thursday 18 
September 2025 10:38:09 +0000 (0:00:00.896) 0:00:03.184 **** 2025-09-18 10:44:28.951264 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.951276 | orchestrator | 2025-09-18 10:44:28.951288 | orchestrator | TASK [sysctl : Check IPv6 support] ********************************************* 2025-09-18 10:44:28.951300 | orchestrator | Thursday 18 September 2025 10:38:10 +0000 (0:00:00.941) 0:00:04.125 **** 2025-09-18 10:44:28.951312 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.951325 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.951337 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:44:28.951349 | orchestrator | 2025-09-18 10:44:28.951361 | orchestrator | TASK [sysctl : Setting sysctl values] ****************************************** 2025-09-18 10:44:28.951373 | orchestrator | Thursday 18 September 2025 10:38:11 +0000 (0:00:00.791) 0:00:04.917 **** 2025-09-18 10:44:28.951385 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951396 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951407 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951417 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951428 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951439 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 10:44:28.951451 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 10:44:28.951488 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 
10:44:28.951510 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 10:44:28.951521 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1}) 2025-09-18 10:44:28.951532 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'}) 2025-09-18 10:44:28.951544 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128}) 2025-09-18 10:44:28.951555 | orchestrator | 2025-09-18 10:44:28.951565 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-18 10:44:28.951577 | orchestrator | Thursday 18 September 2025 10:38:14 +0000 (0:00:03.558) 0:00:08.475 **** 2025-09-18 10:44:28.951588 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-18 10:44:28.951654 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-18 10:44:28.951668 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-18 10:44:28.951886 | orchestrator | 2025-09-18 10:44:28.951930 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-18 10:44:28.951953 | orchestrator | Thursday 18 September 2025 10:38:15 +0000 (0:00:00.754) 0:00:09.229 **** 2025-09-18 10:44:28.951965 | orchestrator | changed: [testbed-node-0] => (item=ip_vs) 2025-09-18 10:44:28.951977 | orchestrator | changed: [testbed-node-2] => (item=ip_vs) 2025-09-18 10:44:28.951988 | orchestrator | changed: [testbed-node-1] => (item=ip_vs) 2025-09-18 10:44:28.951999 | orchestrator | 2025-09-18 10:44:28.952010 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-18 10:44:28.952021 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:02.096) 0:00:11.326 **** 2025-09-18 10:44:28.952058 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)  2025-09-18 
10:44:28.952070 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.952210 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)  2025-09-18 10:44:28.952246 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.952258 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)  2025-09-18 10:44:28.952269 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.952280 | orchestrator | 2025-09-18 10:44:28.952317 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************ 2025-09-18 10:44:28.952487 | orchestrator | Thursday 18 September 2025 10:38:18 +0000 (0:00:01.510) 0:00:12.837 **** 2025-09-18 10:44:28.952524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.952684 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 
2025-09-18 10:44:28.952699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.952711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.952781 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': 
'30'}}}) 2025-09-18 10:44:28.952997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.953014 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.953026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.953043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 
'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.953055 | orchestrator | 2025-09-18 10:44:28.953090 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************ 2025-09-18 10:44:28.953101 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:03.065) 0:00:15.902 **** 2025-09-18 10:44:28.953112 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.953124 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.953135 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.953145 | orchestrator | 2025-09-18 10:44:28.953156 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] **** 2025-09-18 10:44:28.953167 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:01.139) 0:00:17.042 **** 2025-09-18 10:44:28.953178 | orchestrator | changed: [testbed-node-0] => (item=users) 2025-09-18 10:44:28.953189 | orchestrator | changed: [testbed-node-1] => (item=users) 2025-09-18 10:44:28.953200 | orchestrator | changed: [testbed-node-2] => (item=users) 2025-09-18 10:44:28.953212 | orchestrator | changed: [testbed-node-0] => (item=rules) 2025-09-18 10:44:28.953222 | orchestrator | changed: [testbed-node-2] => (item=rules) 2025-09-18 10:44:28.953457 | orchestrator | changed: [testbed-node-1] => (item=rules) 2025-09-18 10:44:28.953516 | orchestrator | 2025-09-18 10:44:28.953646 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] ***************** 2025-09-18 10:44:28.953676 | orchestrator | Thursday 18 September 2025 10:38:25 +0000 (0:00:02.167) 0:00:19.210 **** 2025-09-18 10:44:28.953688 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.953699 | orchestrator | 
changed: [testbed-node-1] 2025-09-18 10:44:28.953709 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.953720 | orchestrator | 2025-09-18 10:44:28.953731 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] ******************* 2025-09-18 10:44:28.953742 | orchestrator | Thursday 18 September 2025 10:38:26 +0000 (0:00:01.602) 0:00:20.812 **** 2025-09-18 10:44:28.953753 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:44:28.953764 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.953775 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.953786 | orchestrator | 2025-09-18 10:44:28.953798 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] ********** 2025-09-18 10:44:28.953809 | orchestrator | Thursday 18 September 2025 10:38:29 +0000 (0:00:02.105) 0:00:22.918 **** 2025-09-18 10:44:28.953820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.953842 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 
'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.953855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.953872 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.953958 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.953999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.954166 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.954248 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.954260 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', 
'__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.954285 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.954309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.954321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.954355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.954385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.954396 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.954407 | orchestrator | 2025-09-18 10:44:28.954418 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-09-18 10:44:28.954429 | orchestrator | Thursday 18 September 2025 10:38:29 +0000 (0:00:00.926) 0:00:23.844 **** 2025-09-18 10:44:28.954441 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954452 | orchestrator 
| changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954479 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954491 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954536 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.954560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.954572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': 
{'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954594 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.954613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.954625 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.954641 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/haproxy-ssh:2024.2', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981', '__omit_place_holder__5e15bcc6909c667b1f7fd1bf7cf386623185b981'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-09-18 10:44:28.954666 | orchestrator | 2025-09-18 10:44:28.954678 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-09-18 10:44:28.954688 | orchestrator | Thursday 18 September 2025 10:38:33 +0000 (0:00:03.584) 0:00:27.429 **** 2025-09-18 10:44:28.954700 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954926 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.954977 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.954989 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.955000 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.955011 | orchestrator | 2025-09-18 10:44:28.955022 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-09-18 10:44:28.955033 | orchestrator | Thursday 18 September 2025 10:38:37 +0000 (0:00:03.961) 0:00:31.390 **** 2025-09-18 10:44:28.955044 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 10:44:28.955055 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 10:44:28.955066 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-09-18 10:44:28.955077 | orchestrator | 2025-09-18 10:44:28.955088 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-09-18 10:44:28.955099 | orchestrator | Thursday 18 September 2025 10:38:41 
+0000 (0:00:03.509) 0:00:34.900 ****
2025-09-18 10:44:28.955265 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-18 10:44:28.955277 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-18 10:44:28.955298 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2)
2025-09-18 10:44:28.955310 | orchestrator |
2025-09-18 10:44:28.958113 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] *****
2025-09-18 10:44:28.958140 | orchestrator | Thursday 18 September 2025 10:38:44 +0000 (0:00:03.886) 0:00:38.787 ****
2025-09-18 10:44:28.958151 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.958161 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.958181 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.958191 | orchestrator |
2025-09-18 10:44:28.958200 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] *******
2025-09-18 10:44:28.958210 | orchestrator | Thursday 18 September 2025 10:38:45 +0000 (0:00:00.912) 0:00:39.700 ****
2025-09-18 10:44:28.958220 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-18 10:44:28.958230 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-18 10:44:28.958240 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg)
2025-09-18 10:44:28.958250 | orchestrator |
2025-09-18 10:44:28.958260 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] *****************************
2025-09-18 10:44:28.958270 | orchestrator | Thursday 18 September 2025 10:38:49 +0000 (0:00:03.613) 0:00:43.313 ****
2025-09-18 10:44:28.958279 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-18 10:44:28.958296 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-18 10:44:28.958307 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2)
2025-09-18 10:44:28.958316 | orchestrator |
2025-09-18 10:44:28.958326 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] *********************************
2025-09-18 10:44:28.958336 | orchestrator | Thursday 18 September 2025 10:38:52 +0000 (0:00:03.530) 0:00:46.844 ****
2025-09-18 10:44:28.958346 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem)
2025-09-18 10:44:28.958356 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem)
2025-09-18 10:44:28.958365 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem)
2025-09-18 10:44:28.958375 | orchestrator |
2025-09-18 10:44:28.958385 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************
2025-09-18 10:44:28.958394 | orchestrator | Thursday 18 September 2025 10:38:55 +0000 (0:00:02.180) 0:00:49.024 ****
2025-09-18 10:44:28.958404 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem)
2025-09-18 10:44:28.958414 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem)
2025-09-18 10:44:28.958424 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem)
2025-09-18 10:44:28.958434 | orchestrator |
2025-09-18 10:44:28.958444 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-09-18 10:44:28.958453 | orchestrator | Thursday 18 September 2025 10:38:56 +0000 (0:00:01.648) 0:00:50.673 ****
2025-09-18 10:44:28.958463 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.958473 | orchestrator |
2025-09-18 10:44:28.958483 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] ***
2025-09-18 10:44:28.958492 | orchestrator | Thursday 18 September 2025 10:38:57 +0000 (0:00:00.817) 0:00:51.490 ****
2025-09-18 10:44:28.958503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958514 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958540 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958551 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958578 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958589 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958601 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958619 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958630 | orchestrator |
2025-09-18 10:44:28.958641 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] ***
2025-09-18 10:44:28.958652 | orchestrator | Thursday 18 September 2025 10:39:01 +0000 (0:00:04.317) 0:00:55.808 ****
2025-09-18 10:44:28.958671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958710 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.958722 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958734 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958751 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958762 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.958774 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958792 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958820 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.958831 | orchestrator |
2025-09-18 10:44:28.958842 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] ***
2025-09-18 10:44:28.958853 | orchestrator | Thursday 18 September 2025 10:39:03 +0000 (0:00:01.553) 0:00:57.362 ****
2025-09-18 10:44:28.958865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958877 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.958895 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.958906 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.958980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.958999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959022 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.959038 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959050 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959080 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.959091 | orchestrator |
2025-09-18 10:44:28.959102 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ********
2025-09-18 10:44:28.959113 | orchestrator | Thursday 18 September 2025 10:39:04 +0000 (0:00:00.736) 0:00:58.098 ****
2025-09-18 10:44:28.959125 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959155 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959166 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.959182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959227 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.959239 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959250 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959279 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.959290 | orchestrator |
2025-09-18 10:44:28.959302 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] ***
2025-09-18 10:44:28.959313 | orchestrator | Thursday 18 September 2025 10:39:05 +0000 (0:00:00.853) 0:00:58.951 ****
2025-09-18 10:44:28.959324 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959340 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959352 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959382 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959393 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.959405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959416 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.959433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959445 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959462 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959480 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.959491 | orchestrator |
2025-09-18 10:44:28.959503 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] *****
2025-09-18 10:44:28.959514 | orchestrator | Thursday 18 September 2025 10:39:05 +0000 (0:00:00.556) 0:00:59.508 ****
2025-09-18 10:44:28.959525 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959537 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959560 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.959577 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959588 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959622 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.959634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959645 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-09-18 10:44:28.959657 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-09-18 10:44:28.959669 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.959680 | orchestrator |
2025-09-18 10:44:28.959691 | orchestrator | TASK [service-cert-copy : proxysql | Copying over extra CA certificates] *******
2025-09-18 10:44:28.959702 | orchestrator | Thursday 18 September 2025 10:39:06 +0000 (0:00:01.070) 0:01:00.579 ****
2025-09-18 10:44:28.959713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-09-18 10:44:28.959731 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.959743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.959760 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.959777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.959788 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.959800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.959811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.959823 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.959841 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.959852 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.959869 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.959881 | orchestrator | 2025-09-18 10:44:28.959892 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS certificate] *** 2025-09-18 10:44:28.959903 | orchestrator | Thursday 18 September 2025 10:39:07 +0000 (0:00:00.810) 0:01:01.390 **** 2025-09-18 10:44:28.959934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.959947 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.959958 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.959970 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.959981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.959993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.960013 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.960030 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.960042 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.960058 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.960070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.960081 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.960092 | orchestrator | 2025-09-18 10:44:28.960103 | orchestrator | TASK [service-cert-copy : proxysql | Copying over backend internal TLS key] **** 2025-09-18 10:44:28.960114 | orchestrator | Thursday 18 September 2025 10:39:08 +0000 (0:00:00.562) 0:01:01.952 **** 2025-09-18 10:44:28.960126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.960137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.960149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.960166 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.960184 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.960200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.960212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.960223 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.960235 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-09-18 10:44:28.960246 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-09-18 10:44:28.960257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-09-18 10:44:28.960268 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.960280 | orchestrator | 2025-09-18 10:44:28.960291 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-09-18 10:44:28.960311 | orchestrator | Thursday 18 September 2025 10:39:08 +0000 (0:00:00.810) 0:01:02.762 **** 2025-09-18 10:44:28.960322 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 10:44:28.960334 | orchestrator | 
changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 10:44:28.960364 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-09-18 10:44:28.960376 | orchestrator | 2025-09-18 10:44:28.960388 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-09-18 10:44:28.960399 | orchestrator | Thursday 18 September 2025 10:39:11 +0000 (0:00:02.664) 0:01:05.427 **** 2025-09-18 10:44:28.960409 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 10:44:28.960421 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 10:44:28.960432 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-09-18 10:44:28.960443 | orchestrator | 2025-09-18 10:44:28.960454 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-09-18 10:44:28.960480 | orchestrator | Thursday 18 September 2025 10:39:13 +0000 (0:00:02.242) 0:01:07.669 **** 2025-09-18 10:44:28.960492 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 10:44:28.960503 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 10:44:28.960514 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.960525 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 10:44:28.960541 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 10:44:28.960552 | orchestrator | skipping: [testbed-node-1] => (item={'src': 
'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 10:44:28.960563 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.960574 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 10:44:28.960585 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.960596 | orchestrator | 2025-09-18 10:44:28.960607 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-09-18 10:44:28.960618 | orchestrator | Thursday 18 September 2025 10:39:15 +0000 (0:00:01.798) 0:01:09.468 **** 2025-09-18 10:44:28.960629 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960641 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960659 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/haproxy:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960677 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960689 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960705 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/proxysql:2024.2', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-09-18 10:44:28.960717 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.960729 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.960740 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/keepalived:2024.2', 'privileged': True, 'volumes': 
['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-09-18 10:44:28.960758 | orchestrator | 2025-09-18 10:44:28.960769 | orchestrator | TASK [include_role : aodh] ***************************************************** 2025-09-18 10:44:28.960780 | orchestrator | Thursday 18 September 2025 10:39:19 +0000 (0:00:03.504) 0:01:12.972 **** 2025-09-18 10:44:28.960791 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.960802 | orchestrator | 2025-09-18 10:44:28.960813 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-09-18 10:44:28.960824 | orchestrator | Thursday 18 September 2025 10:39:19 +0000 (0:00:00.791) 0:01:13.763 **** 2025-09-18 10:44:28.960836 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 10:44:28.960855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 
'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.960867 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.960878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.960938 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 10:44:28.960961 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.960973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 
'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961041 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-09-18 10:44:28.961057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.961068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961086 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961097 | orchestrator | 2025-09-18 10:44:28.961108 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-09-18 10:44:28.961119 | orchestrator | Thursday 18 September 2025 10:39:26 +0000 (0:00:06.177) 0:01:19.941 **** 2025-09-18 10:44:28.961130 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 10:44:28.961148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.961160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961176 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961188 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.961200 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 10:44:28.961217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.961228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': 
['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961251 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.961269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-api:2024.2', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-09-18 10:44:28.961285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-evaluator:2024.2', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-09-18 10:44:28.961297 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-listener:2024.2', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961315 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/aodh-notifier:2024.2', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961326 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.961337 | orchestrator | 2025-09-18 10:44:28.961348 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-09-18 
10:44:28.961359 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:01.097) 0:01:21.039 **** 2025-09-18 10:44:28.961370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961394 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.961405 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961427 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.961438 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-09-18 10:44:28.961460 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.961471 | orchestrator | 2025-09-18 10:44:28.961487 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-09-18 10:44:28.961498 | orchestrator | Thursday 18 September 2025 10:39:28 +0000 (0:00:01.605) 0:01:22.644 **** 2025-09-18 
10:44:28.961509 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.961520 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.961531 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.961542 | orchestrator | 2025-09-18 10:44:28.961552 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-09-18 10:44:28.961563 | orchestrator | Thursday 18 September 2025 10:39:30 +0000 (0:00:01.299) 0:01:23.943 **** 2025-09-18 10:44:28.961574 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.961585 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.961595 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.961606 | orchestrator | 2025-09-18 10:44:28.961617 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-09-18 10:44:28.961628 | orchestrator | Thursday 18 September 2025 10:39:32 +0000 (0:00:02.718) 0:01:26.662 **** 2025-09-18 10:44:28.961639 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.961649 | orchestrator | 2025-09-18 10:44:28.961666 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-09-18 10:44:28.961677 | orchestrator | Thursday 18 September 2025 10:39:33 +0000 (0:00:00.766) 0:01:27.429 **** 2025-09-18 10:44:28.961693 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 
'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.961706 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961718 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 
2025-09-18 10:44:28.961730 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961773 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961785 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.961797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961808 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961819 | orchestrator | 2025-09-18 10:44:28.961830 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-09-18 10:44:28.961841 | orchestrator | Thursday 18 September 2025 10:39:36 +0000 (0:00:02.909) 0:01:30.339 **** 2025-09-18 10:44:28.961859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.961876 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961892 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961904 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.961956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.961969 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961981 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.961992 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 
'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.962067 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.962078 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.962088 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962098 | orchestrator | 2025-09-18 10:44:28.962120 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-09-18 10:44:28.962131 | orchestrator | Thursday 18 September 2025 10:39:37 +0000 (0:00:00.541) 0:01:30.881 **** 2025-09-18 10:44:28.962141 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962161 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962182 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962191 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962201 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962211 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-09-18 10:44:28.962221 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962231 | orchestrator | 2025-09-18 10:44:28.962240 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-09-18 10:44:28.962250 | orchestrator | Thursday 18 September 2025 10:39:37 +0000 (0:00:00.847) 0:01:31.728 **** 2025-09-18 10:44:28.962266 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.962276 | orchestrator | 
changed: [testbed-node-2] 2025-09-18 10:44:28.962286 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.962295 | orchestrator | 2025-09-18 10:44:28.962305 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-09-18 10:44:28.962315 | orchestrator | Thursday 18 September 2025 10:39:39 +0000 (0:00:01.228) 0:01:32.957 **** 2025-09-18 10:44:28.962325 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.962335 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.962344 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.962354 | orchestrator | 2025-09-18 10:44:28.962377 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-09-18 10:44:28.962387 | orchestrator | Thursday 18 September 2025 10:39:40 +0000 (0:00:01.882) 0:01:34.839 **** 2025-09-18 10:44:28.962397 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962407 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962417 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962426 | orchestrator | 2025-09-18 10:44:28.962436 | orchestrator | TASK [include_role : ceph-rgw] ************************************************* 2025-09-18 10:44:28.962445 | orchestrator | Thursday 18 September 2025 10:39:41 +0000 (0:00:00.296) 0:01:35.136 **** 2025-09-18 10:44:28.962455 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.962464 | orchestrator | 2025-09-18 10:44:28.962474 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] ******************* 2025-09-18 10:44:28.962484 | orchestrator | Thursday 18 September 2025 10:39:41 +0000 (0:00:00.633) 0:01:35.770 **** 2025-09-18 10:44:28.962498 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-18 10:44:28.962526 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-18 10:44:28.962537 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 
'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}}) 2025-09-18 10:44:28.962554 | orchestrator | 2025-09-18 10:44:28.962563 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] *** 2025-09-18 10:44:28.962573 | orchestrator | Thursday 18 September 2025 10:39:44 +0000 (0:00:02.783) 0:01:38.554 **** 2025-09-18 10:44:28.962589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 10:44:28.962600 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962610 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 
check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 10:44:28.962620 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962634 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})  2025-09-18 10:44:28.962645 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962654 | orchestrator | 2025-09-18 10:44:28.962664 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] ********************** 2025-09-18 10:44:28.962674 | orchestrator | Thursday 18 September 2025 10:39:45 +0000 (0:00:01.218) 0:01:39.772 **** 2025-09-18 10:44:28.962684 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server 
testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962695 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962711 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962722 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962732 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962742 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check 
inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})  2025-09-18 10:44:28.962778 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962788 | orchestrator | 2025-09-18 10:44:28.962798 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] *********** 2025-09-18 10:44:28.962807 | orchestrator | Thursday 18 September 2025 10:39:47 +0000 (0:00:01.545) 0:01:41.317 **** 2025-09-18 10:44:28.962817 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962827 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962836 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962846 | orchestrator | 2025-09-18 10:44:28.962856 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] *********** 2025-09-18 10:44:28.962865 | orchestrator | Thursday 18 September 2025 10:39:48 +0000 (0:00:00.562) 0:01:41.880 **** 2025-09-18 10:44:28.962875 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.962885 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.962899 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.962909 | orchestrator | 2025-09-18 10:44:28.962936 | orchestrator | TASK [include_role : cinder] *************************************************** 2025-09-18 10:44:28.962945 | orchestrator | Thursday 18 September 2025 10:39:49 +0000 (0:00:01.102) 
0:01:42.983 **** 2025-09-18 10:44:28.962955 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.962965 | orchestrator | 2025-09-18 10:44:28.962974 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] ********************* 2025-09-18 10:44:28.962984 | orchestrator | Thursday 18 September 2025 10:39:49 +0000 (0:00:00.685) 0:01:43.668 **** 2025-09-18 10:44:28.962994 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.963010 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963021 | orchestrator | 
skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.963064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963079 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': 
'30'}}})  2025-09-18 10:44:28.963090 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963105 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.963115 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963160 | 
orchestrator | 2025-09-18 10:44:28.963171 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] *** 2025-09-18 10:44:28.963181 | orchestrator | Thursday 18 September 2025 10:39:53 +0000 (0:00:03.516) 0:01:47.185 **** 2025-09-18 10:44:28.963191 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.963201 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963247 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.963257 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.963268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963293 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963304 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.963318 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.963334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': 
['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963355 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.963365 | 
orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.963375 | orchestrator |
2025-09-18 10:44:28.963384 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-09-18 10:44:28.963394 | orchestrator | Thursday 18 September 2025 10:39:54 +0000 (0:00:01.056) 0:01:48.241 ****
2025-09-18 10:44:28.963404 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963419 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963430 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.963440 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963460 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.963470 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-09-18 10:44:28.963500 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.963510 | orchestrator |
2025-09-18 10:44:28.963519 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-09-18 10:44:28.963529 | orchestrator | Thursday 18 September 2025 10:39:55 +0000 (0:00:01.215) 0:01:49.456 ****
2025-09-18 10:44:28.963539 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.963548 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.963558 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.963568 | orchestrator |
2025-09-18 10:44:28.963578 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-09-18 10:44:28.963588 | orchestrator | Thursday 18 September 2025 10:39:57 +0000 (0:00:01.577) 0:01:51.034 ****
2025-09-18 10:44:28.963597 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.963607 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.963617 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.963626 | orchestrator |
2025-09-18 10:44:28.963636 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-09-18 10:44:28.963645 | orchestrator | Thursday 18 September 2025 10:39:59 +0000 (0:00:02.118) 0:01:53.152 ****
2025-09-18 10:44:28.963655 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.963665 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.963674 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.963684 | orchestrator |
2025-09-18 10:44:28.963694 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-09-18 10:44:28.963703 | orchestrator | Thursday 18 September 2025 10:39:59 +0000 (0:00:00.519) 0:01:53.672 ****
2025-09-18 10:44:28.963713 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.963723 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.963732 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.963742 | orchestrator |
2025-09-18 10:44:28.963751 | orchestrator | TASK [include_role : designate] ************************************************
2025-09-18 10:44:28.963761 | orchestrator | Thursday 18 September 2025 10:40:00 +0000 (0:00:00.321) 0:01:53.993 ****
2025-09-18 10:44:28.963770 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.963780 | orchestrator |
2025-09-18 10:44:28.963790 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-09-18 10:44:28.963799 | orchestrator | Thursday 18 September 2025 10:40:00 +0000 (0:00:00.785) 0:01:54.779 ****
2025-09-18 10:44:28.963809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.963825 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.963840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.963855 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.963866 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.963876 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963906 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963930 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.963945 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963965 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963975 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.963986 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964006 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964017 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964031 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964041 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964072 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964086 | orchestrator |
2025-09-18 10:44:28.964096 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] ***
2025-09-18 10:44:28.964106 | orchestrator | Thursday 18 September 2025 10:40:04 +0000 (0:00:03.754) 0:01:58.534 ****
2025-09-18 10:44:28.964122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.964140 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.964150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964160 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964171 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.964202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964212 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.964223 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964233 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964243 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.964253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964268 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964283 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964320 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.964334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:44:28.964344 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:44:28.964354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964380 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964405 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/designate-sink:2024.2', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.964415 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.964425 | orchestrator |
2025-09-18 10:44:28.964439 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-09-18 10:44:28.964449 | orchestrator | Thursday 18 September 2025 10:40:05 +0000 (0:00:00.881) 0:01:59.415 ****
2025-09-18 10:44:28.964459 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964479 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.964489 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964509 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.964524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-09-18 10:44:28.964544 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.964554 | orchestrator |
2025-09-18 10:44:28.964564 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-09-18 10:44:28.964574 | orchestrator | Thursday 18 September 2025 10:40:06 +0000 (0:00:01.005) 0:02:00.421 ****
2025-09-18 10:44:28.964583 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.964593 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.964603 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.964613 | orchestrator |
2025-09-18 10:44:28.964623 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL rules config] **********
2025-09-18 10:44:28.964632 | orchestrator | Thursday 18 September 2025 10:40:07 +0000 (0:00:01.271) 0:02:01.692 ****
2025-09-18 10:44:28.964642 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.964651 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.964661 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.964670 | orchestrator |
2025-09-18 10:44:28.964680 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-09-18 10:44:28.964690 | orchestrator | Thursday 18 September 2025 10:40:09 +0000 (0:00:02.144) 0:02:03.837 ****
2025-09-18 10:44:28.964700 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.964709 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.964719 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.964729 | orchestrator |
2025-09-18 10:44:28.964738 | orchestrator | TASK [include_role : glance] ***************************************************
2025-09-18 10:44:28.964748 | orchestrator | Thursday 18 September 2025 10:40:10 +0000 (0:00:00.858) 0:02:04.399 ****
2025-09-18 10:44:28.964758 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.964767 | orchestrator |
2025-09-18 10:44:28.964777 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-09-18 10:44:28.964787 | orchestrator | Thursday 18 September 2025 10:40:11 +0000 (0:00:00.858) 0:02:05.258 ****
2025-09-18 10:44:28.964809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})
2025-09-18 10:44:28.964828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5',
'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965013 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', 
'/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:44:28.965040 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 
6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:44:28.965082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965099 | orchestrator | 2025-09-18 10:44:28.965109 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-09-18 10:44:28.965119 | orchestrator | Thursday 18 September 2025 10:40:15 +0000 (0:00:04.254) 0:02:09.512 **** 2025-09-18 10:44:28.965135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:44:28.965151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 
''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965171 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.965182 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 
fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:44:28.965204 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965220 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.965231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:44:28.965249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/glance-tls-proxy:2024.2', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required 
ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.965264 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.965275 | orchestrator | 2025-09-18 10:44:28.965289 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-09-18 10:44:28.965299 | orchestrator | Thursday 18 September 2025 10:40:18 +0000 (0:00:03.152) 0:02:12.664 **** 2025-09-18 10:44:28.965309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 10:44:28.965320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 10:44:28.965330 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.965340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 10:44:28.965351 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 10:44:28.965361 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.965371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-09-18 10:44:28.965386 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', 
'']}})  2025-09-18 10:44:28.965396 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.965407 | orchestrator | 2025-09-18 10:44:28.965417 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-09-18 10:44:28.965432 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:03.152) 0:02:15.817 **** 2025-09-18 10:44:28.965442 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.965452 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.965462 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.965472 | orchestrator | 2025-09-18 10:44:28.965482 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-09-18 10:44:28.965491 | orchestrator | Thursday 18 September 2025 10:40:23 +0000 (0:00:01.243) 0:02:17.060 **** 2025-09-18 10:44:28.965519 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.965529 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.965539 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.965549 | orchestrator | 2025-09-18 10:44:28.965558 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-09-18 10:44:28.965568 | orchestrator | Thursday 18 September 2025 10:40:25 +0000 (0:00:02.099) 0:02:19.160 **** 2025-09-18 10:44:28.965582 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.965592 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.965602 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.965612 | orchestrator | 2025-09-18 10:44:28.965622 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-09-18 10:44:28.965632 | orchestrator | Thursday 18 September 2025 10:40:25 +0000 (0:00:00.415) 0:02:19.575 **** 2025-09-18 10:44:28.965641 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.965651 | 
orchestrator | 2025-09-18 10:44:28.965661 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-09-18 10:44:28.965670 | orchestrator | Thursday 18 September 2025 10:40:26 +0000 (0:00:00.833) 0:02:20.409 **** 2025-09-18 10:44:28.965681 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 10:44:28.965692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 10:44:28.965703 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 10:44:28.965713 | orchestrator | 2025-09-18 10:44:28.965723 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-09-18 10:44:28.965738 | orchestrator | Thursday 18 September 2025 10:40:29 +0000 (0:00:03.031) 0:02:23.441 **** 2025-09-18 10:44:28.965754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 10:44:28.965765 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.965775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 10:44:28.965785 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.965800 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 10:44:28.965810 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.965820 | orchestrator | 2025-09-18 10:44:28.965830 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-09-18 10:44:28.965840 | orchestrator | Thursday 18 September 2025 10:40:30 +0000 (0:00:00.539) 0:02:23.980 **** 2025-09-18 10:44:28.965850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 10:44:28.965860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 10:44:28.965870 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 
10:44:28.965880 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 10:44:28.965890 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.965900 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.965959 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-09-18 10:44:28.965971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-09-18 10:44:28.965986 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.965996 | orchestrator | 2025-09-18 10:44:28.966006 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************ 2025-09-18 10:44:28.966076 | orchestrator | Thursday 18 September 2025 10:40:30 +0000 (0:00:00.711) 0:02:24.692 **** 2025-09-18 10:44:28.966090 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.966100 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.966109 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.966119 | orchestrator | 2025-09-18 10:44:28.966129 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************ 2025-09-18 10:44:28.966139 | orchestrator | Thursday 18 September 2025 10:40:32 +0000 (0:00:01.260) 0:02:25.953 **** 2025-09-18 10:44:28.966149 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.966172 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.966182 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.966192 | orchestrator | 2025-09-18 10:44:28.966202 | 
orchestrator | TASK [include_role : heat] ***************************************************** 2025-09-18 10:44:28.966212 | orchestrator | Thursday 18 September 2025 10:40:34 +0000 (0:00:02.044) 0:02:27.998 **** 2025-09-18 10:44:28.966221 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.966231 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.966257 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.966268 | orchestrator | 2025-09-18 10:44:28.966278 | orchestrator | TASK [include_role : horizon] ************************************************** 2025-09-18 10:44:28.966288 | orchestrator | Thursday 18 September 2025 10:40:34 +0000 (0:00:00.420) 0:02:28.418 **** 2025-09-18 10:44:28.966298 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.966307 | orchestrator | 2025-09-18 10:44:28.966317 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ******************** 2025-09-18 10:44:28.966327 | orchestrator | Thursday 18 September 2025 10:40:35 +0000 (0:00:00.815) 0:02:29.234 **** 2025-09-18 10:44:28.966354 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:44:28.966389 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:44:28.966407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 
'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:44:28.966426 | orchestrator | 2025-09-18 10:44:28.966436 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] *** 2025-09-18 10:44:28.966446 | orchestrator | Thursday 18 September 2025 10:40:39 +0000 
(0:00:04.064) 0:02:33.299 **** 2025-09-18 10:44:28.966482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:44:28.966495 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.966508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend 
acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:44:28.966524 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.966542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': 
['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:44:28.966551 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.966560 | orchestrator | 2025-09-18 10:44:28.966577 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] *********************** 2025-09-18 10:44:28.966585 | orchestrator | Thursday 18 September 2025 10:40:40 +0000 (0:00:01.491) 0:02:34.790 **** 2025-09-18 10:44:28.966593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966615 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966631 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966640 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966704 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 10:44:28.966713 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 10:44:28.966722 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.966730 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.966738 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966776 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-09-18 10:44:28.966789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-09-18 10:44:28.966797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-09-18 10:44:28.966806 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.966814 | orchestrator | 2025-09-18 10:44:28.966829 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-09-18 10:44:28.966838 | orchestrator | Thursday 18 September 2025 10:40:41 +0000 (0:00:01.012) 0:02:35.803 **** 2025-09-18 10:44:28.966846 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.966854 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.966862 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.966870 | orchestrator | 2025-09-18 10:44:28.966878 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-09-18 10:44:28.966886 | orchestrator | Thursday 18 September 2025 10:40:43 +0000 (0:00:01.472) 0:02:37.276 **** 2025-09-18 10:44:28.966893 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.966901 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.966909 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.966932 | orchestrator | 2025-09-18 10:44:28.966940 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-09-18 10:44:28.966948 | orchestrator | Thursday 18 September 2025 10:40:45 +0000 (0:00:02.071) 0:02:39.347 **** 2025-09-18 10:44:28.966964 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.966972 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.966980 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.966988 | orchestrator | 2025-09-18 10:44:28.966996 | orchestrator | TASK [include_role : ironic] 
*************************************************** 2025-09-18 10:44:28.967004 | orchestrator | Thursday 18 September 2025 10:40:45 +0000 (0:00:00.317) 0:02:39.664 **** 2025-09-18 10:44:28.967012 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.967028 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.967037 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.967045 | orchestrator | 2025-09-18 10:44:28.967053 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-09-18 10:44:28.967061 | orchestrator | Thursday 18 September 2025 10:40:46 +0000 (0:00:00.731) 0:02:40.395 **** 2025-09-18 10:44:28.967069 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.967076 | orchestrator | 2025-09-18 10:44:28.967084 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-09-18 10:44:28.967092 | orchestrator | Thursday 18 September 2025 10:40:47 +0000 (0:00:01.018) 0:02:41.414 **** 2025-09-18 10:44:28.967114 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:44:28.967129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967142 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967152 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:44:28.967161 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967193 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:44:28.967204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967230 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967238 | orchestrator | 2025-09-18 10:44:28.967246 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-09-18 10:44:28.967263 | orchestrator | Thursday 18 September 2025 10:40:51 +0000 (0:00:04.184) 0:02:45.599 **** 2025-09-18 10:44:28.967271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 10:44:28.967280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967314 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.967326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 10:44:28.967335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 
'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967352 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.967361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 10:44:28.967388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:44:28.967402 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:44:28.967411 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.967419 | orchestrator | 2025-09-18 10:44:28.967428 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-09-18 10:44:28.967436 | orchestrator | Thursday 18 September 2025 10:40:52 +0000 (0:00:00.985) 0:02:46.584 **** 2025-09-18 10:44:28.967456 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967475 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.967483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967500 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.967508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}})  2025-09-18 10:44:28.967526 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.967547 
| orchestrator | 2025-09-18 10:44:28.967555 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-09-18 10:44:28.967563 | orchestrator | Thursday 18 September 2025 10:40:53 +0000 (0:00:00.803) 0:02:47.387 **** 2025-09-18 10:44:28.967571 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.967579 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.967587 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.967595 | orchestrator | 2025-09-18 10:44:28.967603 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-09-18 10:44:28.967611 | orchestrator | Thursday 18 September 2025 10:40:54 +0000 (0:00:01.300) 0:02:48.688 **** 2025-09-18 10:44:28.967619 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.967627 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.967640 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.967657 | orchestrator | 2025-09-18 10:44:28.967665 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-09-18 10:44:28.967673 | orchestrator | Thursday 18 September 2025 10:40:56 +0000 (0:00:02.149) 0:02:50.837 **** 2025-09-18 10:44:28.967681 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.967689 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.967697 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.967705 | orchestrator | 2025-09-18 10:44:28.967713 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-09-18 10:44:28.967721 | orchestrator | Thursday 18 September 2025 10:40:57 +0000 (0:00:00.606) 0:02:51.443 **** 2025-09-18 10:44:28.967737 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.967745 | orchestrator | 2025-09-18 10:44:28.967753 | orchestrator | TASK [haproxy-config : Copying over magnum 
haproxy config] ********************* 2025-09-18 10:44:28.967761 | orchestrator | Thursday 18 September 2025 10:40:58 +0000 (0:00:01.034) 0:02:52.478 **** 2025-09-18 10:44:28.967782 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:44:28.967796 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.967804 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:44:28.967813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.967832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:44:28.967861 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.967871 | orchestrator | 2025-09-18 10:44:28.967879 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-09-18 10:44:28.967887 | orchestrator | Thursday 18 September 2025 10:41:02 +0000 (0:00:04.007) 0:02:56.486 **** 2025-09-18 10:44:28.967899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 10:44:28.967908 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 10:44:28.967943 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.967952 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.967972 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.967981 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.967990 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-09-18 10:44:28.968002 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968010 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.968019 | orchestrator | 2025-09-18 10:44:28.968027 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-09-18 10:44:28.968035 | orchestrator | Thursday 18 September 2025 10:41:03 +0000 (0:00:01.024) 0:02:57.510 **** 2025-09-18 10:44:28.968052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968061 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968074 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.968083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968091 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968099 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.968108 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968116 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-09-18 10:44:28.968124 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.968141 | orchestrator | 2025-09-18 10:44:28.968150 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-09-18 10:44:28.968158 | orchestrator | Thursday 18 September 2025 10:41:04 +0000 (0:00:00.930) 0:02:58.441 **** 2025-09-18 10:44:28.968166 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.968174 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.968181 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.968190 | orchestrator | 2025-09-18 10:44:28.968197 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-09-18 10:44:28.968213 | orchestrator | Thursday 18 September 2025 10:41:06 +0000 (0:00:01.439) 0:02:59.880 **** 2025-09-18 10:44:28.968221 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.968229 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.968236 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.968244 | orchestrator | 2025-09-18 10:44:28.968252 | orchestrator | TASK [include_role : manila] *************************************************** 2025-09-18 10:44:28.968260 | 
orchestrator | Thursday 18 September 2025 10:41:08 +0000 (0:00:02.024) 0:03:01.904 **** 2025-09-18 10:44:28.968279 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.968288 | orchestrator | 2025-09-18 10:44:28.968304 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-09-18 10:44:28.968313 | orchestrator | Thursday 18 September 2025 10:41:09 +0000 (0:00:01.252) 0:03:03.157 **** 2025-09-18 10:44:28.968321 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 10:44:28.968330 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 10:44:28.968344 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968390 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968412 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968438 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-09-18 10:44:28.968446 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968484 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968493 | orchestrator | 2025-09-18 10:44:28.968501 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-09-18 10:44:28.968509 | orchestrator | Thursday 18 September 2025 10:41:13 +0000 (0:00:03.824) 0:03:06.982 **** 2025-09-18 10:44:28.968517 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 10:44:28.968534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968560 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968568 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.968577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 10:44:28.968596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968622 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968630 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.968639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/manila-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-09-18 10:44:28.968647 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/manila-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/manila-share:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968682 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/manila-data:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.968691 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.968700 | orchestrator | 2025-09-18 
10:44:28.968708 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-09-18 10:44:28.968716 | orchestrator | Thursday 18 September 2025 10:41:13 +0000 (0:00:00.620) 0:03:07.602 **** 2025-09-18 10:44:28.968724 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968745 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968753 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.968761 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968772 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968781 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.968789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968797 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-09-18 10:44:28.968816 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.968825 | orchestrator | 2025-09-18 10:44:28.968833 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] 
************* 2025-09-18 10:44:28.968841 | orchestrator | Thursday 18 September 2025 10:41:14 +0000 (0:00:01.178) 0:03:08.780 **** 2025-09-18 10:44:28.968849 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.968857 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.968865 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.968874 | orchestrator | 2025-09-18 10:44:28.968882 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-09-18 10:44:28.968889 | orchestrator | Thursday 18 September 2025 10:41:16 +0000 (0:00:01.286) 0:03:10.067 **** 2025-09-18 10:44:28.968897 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.968905 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.968949 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.968958 | orchestrator | 2025-09-18 10:44:28.968979 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-09-18 10:44:28.968987 | orchestrator | Thursday 18 September 2025 10:41:18 +0000 (0:00:02.226) 0:03:12.293 **** 2025-09-18 10:44:28.968996 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.969003 | orchestrator | 2025-09-18 10:44:28.969011 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-09-18 10:44:28.969020 | orchestrator | Thursday 18 September 2025 10:41:19 +0000 (0:00:01.354) 0:03:13.648 **** 2025-09-18 10:44:28.969028 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 10:44:28.969036 | orchestrator | 2025-09-18 10:44:28.969044 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-09-18 10:44:28.969052 | orchestrator | Thursday 18 September 2025 10:41:22 +0000 (0:00:03.023) 0:03:16.671 **** 2025-09-18 10:44:28.969075 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 
'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969102 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969129 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969175 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969182 | orchestrator | 2025-09-18 10:44:28.969189 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-09-18 10:44:28.969195 | orchestrator | Thursday 18 September 2025 10:41:25 +0000 (0:00:02.222) 0:03:18.894 **** 2025-09-18 10:44:28.969203 | orchestrator | skipping: [testbed-node-0] => 
(item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 
'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969235 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969245 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option 
srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969260 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969284 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 
'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:44:28.969299 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': False, 'image': 'registry.osism.tech/kolla/mariadb-clustercheck:2024.2', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-09-18 10:44:28.969307 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969313 | orchestrator | 2025-09-18 10:44:28.969320 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-09-18 10:44:28.969327 | orchestrator | Thursday 18 September 2025 10:41:27 +0000 (0:00:02.609) 0:03:21.504 **** 2025-09-18 10:44:28.969341 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969356 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969363 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969374 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 
'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969381 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969398 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}})  2025-09-18 10:44:28.969416 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969423 | orchestrator | 2025-09-18 10:44:28.969429 | orchestrator | TASK [proxysql-config : 
Copying over mariadb ProxySQL users config] ************ 2025-09-18 10:44:28.969436 | orchestrator | Thursday 18 September 2025 10:41:30 +0000 (0:00:03.150) 0:03:24.654 **** 2025-09-18 10:44:28.969443 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.969450 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.969456 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.969463 | orchestrator | 2025-09-18 10:44:28.969470 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************ 2025-09-18 10:44:28.969476 | orchestrator | Thursday 18 September 2025 10:41:32 +0000 (0:00:01.852) 0:03:26.507 **** 2025-09-18 10:44:28.969483 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969490 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969504 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969511 | orchestrator | 2025-09-18 10:44:28.969530 | orchestrator | TASK [include_role : masakari] ************************************************* 2025-09-18 10:44:28.969537 | orchestrator | Thursday 18 September 2025 10:41:34 +0000 (0:00:01.516) 0:03:28.023 **** 2025-09-18 10:44:28.969543 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969550 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969557 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969563 | orchestrator | 2025-09-18 10:44:28.969570 | orchestrator | TASK [include_role : memcached] ************************************************ 2025-09-18 10:44:28.969577 | orchestrator | Thursday 18 September 2025 10:41:34 +0000 (0:00:00.387) 0:03:28.411 **** 2025-09-18 10:44:28.969588 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.969595 | orchestrator | 2025-09-18 10:44:28.969602 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ****************** 2025-09-18 10:44:28.969608 | orchestrator | 
Thursday 18 September 2025 10:41:35 +0000 (0:00:01.412) 0:03:29.824 **** 2025-09-18 10:44:28.969616 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 10:44:28.969623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 10:44:28.969642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-09-18 10:44:28.969650 | orchestrator | 2025-09-18 10:44:28.969657 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-09-18 10:44:28.969663 | orchestrator | Thursday 18 September 2025 10:41:37 +0000 (0:00:01.622) 0:03:31.446 **** 2025-09-18 10:44:28.969674 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 10:44:28.969681 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 10:44:28.969718 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/memcached:2024.2', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-09-18 10:44:28.969733 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969740 | orchestrator | 2025-09-18 10:44:28.969746 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-09-18 10:44:28.969753 | orchestrator | Thursday 18 September 2025 10:41:37 +0000 (0:00:00.397) 0:03:31.843 **** 2025-09-18 10:44:28.969760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 10:44:28.969767 | orchestrator | skipping: [testbed-node-0] 
2025-09-18 10:44:28.969774 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 10:44:28.969781 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-09-18 10:44:28.969807 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969821 | orchestrator | 2025-09-18 10:44:28.969828 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] ********** 2025-09-18 10:44:28.969835 | orchestrator | Thursday 18 September 2025 10:41:38 +0000 (0:00:00.921) 0:03:32.765 **** 2025-09-18 10:44:28.969841 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969848 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969855 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969861 | orchestrator | 2025-09-18 10:44:28.969876 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] ********** 2025-09-18 10:44:28.969883 | orchestrator | Thursday 18 September 2025 10:41:39 +0000 (0:00:00.462) 0:03:33.227 **** 2025-09-18 10:44:28.969890 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969896 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969903 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969909 | orchestrator | 2025-09-18 10:44:28.969925 | orchestrator | TASK [include_role : mistral] ************************************************** 2025-09-18 10:44:28.969932 | orchestrator | 
Thursday 18 September 2025 10:41:40 +0000 (0:00:01.354) 0:03:34.581 **** 2025-09-18 10:44:28.969939 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.969946 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.969957 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.969964 | orchestrator | 2025-09-18 10:44:28.969971 | orchestrator | TASK [include_role : neutron] ************************************************** 2025-09-18 10:44:28.969977 | orchestrator | Thursday 18 September 2025 10:41:41 +0000 (0:00:00.348) 0:03:34.930 **** 2025-09-18 10:44:28.969987 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.969994 | orchestrator | 2025-09-18 10:44:28.970001 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ******************** 2025-09-18 10:44:28.970007 | orchestrator | Thursday 18 September 2025 10:41:42 +0000 (0:00:01.537) 0:03:36.468 **** 2025-09-18 10:44:28.970039 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:44:28.970048 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970055 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.970099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970107 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 
'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:44:28.970151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970166 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 
'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970181 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970189 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970216 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970225 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 
'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.970247 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970254 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.970287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970317 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:44:28.970325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970332 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': 
{'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970368 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970379 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970387 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970402 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970410 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.970449 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.970465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 
5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970479 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970487 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970542 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 
'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970564 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970593 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.970601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970608 | orchestrator | 2025-09-18 10:44:28.970620 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] *** 2025-09-18 10:44:28.970628 | orchestrator | Thursday 18 September 2025 10:41:47 +0000 (0:00:04.426) 0:03:40.895 **** 2025-09-18 10:44:28.970635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:44:28.970642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970687 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.970713 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970721 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970728 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970746 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970796 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:44:28.970803 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970811 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': 
False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 
'timeout': '30'}}})  2025-09-18 10:44:28.970881 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.970889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.970896 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 
'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.970907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.970950 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.970976 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 
10:44:28.970984 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.970995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:44:28.971003 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971010 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/neutron-openvswitch-agent:2024.2', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971022 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971039 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/neutron-linuxbridge-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971047 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.971073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/neutron-dhcp-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 
'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971093 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/neutron-l3-agent:2024.2', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-09-18 10:44:28.971114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971131 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 
'image': 'registry.osism.tech/kolla/neutron-sriov-agent:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/neutron-mlnx-agent:2024.2', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971156 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971164 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/neutron-eswitchd:2024.2', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.971183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 
'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.971224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.971231 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
10:44:28.971238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/neutron-bgp-dragent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971249 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/neutron-infoblox-ipam-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971256 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/neutron-metering-agent:2024.2', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}}})  2025-09-18 10:44:28.971263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/ironic-neutron-agent:2024.2', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.971327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/neutron-tls-proxy:2024.2', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-09-18 10:44:28.971356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/neutron-ovn-agent:2024.2', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:44:28.971364 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.971378 | orchestrator | 2025-09-18 10:44:28.971385 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-09-18 10:44:28.971392 | orchestrator | Thursday 18 September 2025 10:41:48 +0000 (0:00:01.490) 0:03:42.386 **** 2025-09-18 10:44:28.971399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971406 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971417 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.971424 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971431 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971438 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.971444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971451 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-09-18 10:44:28.971458 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.971465 | orchestrator | 2025-09-18 10:44:28.971479 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-09-18 10:44:28.971486 | orchestrator | Thursday 18 September 2025 10:41:50 +0000 (0:00:02.138) 0:03:44.525 **** 2025-09-18 10:44:28.971493 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.971500 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.971506 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.971513 | orchestrator | 2025-09-18 10:44:28.971519 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-09-18 10:44:28.971526 | orchestrator | Thursday 18 September 2025 10:41:52 +0000 (0:00:01.398) 0:03:45.923 **** 2025-09-18 10:44:28.971533 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.971539 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.971546 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.971553 | orchestrator | 2025-09-18 10:44:28.971559 | orchestrator | TASK [include_role : placement] ************************************************ 2025-09-18 10:44:28.971566 | orchestrator | Thursday 18 September 2025 10:41:53 +0000 (0:00:01.937) 0:03:47.861 **** 2025-09-18 10:44:28.971572 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.971579 | orchestrator | 2025-09-18 10:44:28.971586 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-09-18 10:44:28.971592 | orchestrator | Thursday 18 September 2025 10:41:55 +0000 (0:00:01.150) 0:03:49.011 **** 2025-09-18 10:44:28.971613 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 
'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.971624 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.971636 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.971643 | orchestrator | 2025-09-18 10:44:28.971650 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-09-18 10:44:28.971658 | orchestrator | Thursday 18 September 2025 10:41:58 +0000 (0:00:03.295) 0:03:52.307 **** 2025-09-18 10:44:28.971665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.971672 | orchestrator | skipping: [testbed-node-0] 2025-09-18 
10:44:28.971690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.971698 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.971705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.971719 | orchestrator | skipping: [testbed-node-2] 
2025-09-18 10:44:28.971726 | orchestrator | 2025-09-18 10:44:28.971733 | orchestrator | TASK [haproxy-config : Configuring firewall for placement] ********************* 2025-09-18 10:44:28.971740 | orchestrator | Thursday 18 September 2025 10:41:58 +0000 (0:00:00.510) 0:03:52.817 **** 2025-09-18 10:44:28.971748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971763 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.971770 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971777 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971784 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.971791 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971799 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-09-18 10:44:28.971806 | orchestrator 
| skipping: [testbed-node-2] 2025-09-18 10:44:28.971813 | orchestrator | 2025-09-18 10:44:28.971820 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL users config] ********** 2025-09-18 10:44:28.971827 | orchestrator | Thursday 18 September 2025 10:41:59 +0000 (0:00:00.670) 0:03:53.487 **** 2025-09-18 10:44:28.971834 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.971841 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.971847 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.971854 | orchestrator | 2025-09-18 10:44:28.971862 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-09-18 10:44:28.971869 | orchestrator | Thursday 18 September 2025 10:42:01 +0000 (0:00:01.395) 0:03:54.883 **** 2025-09-18 10:44:28.971876 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.971882 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.971889 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.971896 | orchestrator | 2025-09-18 10:44:28.971903 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-09-18 10:44:28.971943 | orchestrator | Thursday 18 September 2025 10:42:03 +0000 (0:00:02.339) 0:03:57.223 **** 2025-09-18 10:44:28.971952 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.971959 | orchestrator | 2025-09-18 10:44:28.971966 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-09-18 10:44:28.971973 | orchestrator | Thursday 18 September 2025 10:42:04 +0000 (0:00:01.611) 0:03:58.834 **** 2025-09-18 10:44:28.971993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': 
['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.972010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972018 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.972034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972056 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 10:44:28.972068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972075 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972090 | orchestrator | 2025-09-18 10:44:28.972097 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-09-18 10:44:28.972104 | orchestrator | Thursday 18 September 2025 10:42:09 +0000 (0:00:04.518) 0:04:03.353 **** 2025-09-18 10:44:28.972122 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.972136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972146 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972153 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972161 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.972169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972176 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972187 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972206 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.972219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/nova-super-conductor:2024.2', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 10:44:28.972234 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972241 | orchestrator | 2025-09-18 10:44:28.972248 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-09-18 10:44:28.972255 | orchestrator | Thursday 18 September 2025 10:42:10 +0000 (0:00:01.298) 0:04:04.652 **** 2025-09-18 10:44:28.972263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972270 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972289 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972297 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972304 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972318 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972344 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api', 
'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972359 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972373 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-09-18 10:44:28.972380 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972387 | orchestrator | 2025-09-18 10:44:28.972394 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-09-18 10:44:28.972401 | orchestrator | Thursday 18 September 2025 10:42:11 +0000 (0:00:00.933) 0:04:05.585 **** 2025-09-18 10:44:28.972411 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.972418 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.972425 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.972432 | orchestrator | 2025-09-18 10:44:28.972439 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-09-18 10:44:28.972446 | orchestrator | Thursday 18 September 2025 10:42:13 +0000 (0:00:01.462) 0:04:07.048 **** 2025-09-18 10:44:28.972453 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.972460 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.972467 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.972473 
| orchestrator | 2025-09-18 10:44:28.972480 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-09-18 10:44:28.972486 | orchestrator | Thursday 18 September 2025 10:42:15 +0000 (0:00:02.245) 0:04:09.293 **** 2025-09-18 10:44:28.972493 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.972499 | orchestrator | 2025-09-18 10:44:28.972505 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-09-18 10:44:28.972512 | orchestrator | Thursday 18 September 2025 10:42:16 +0000 (0:00:01.532) 0:04:10.826 **** 2025-09-18 10:44:28.972518 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-09-18 10:44:28.972528 | orchestrator | 2025-09-18 10:44:28.972535 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-09-18 10:44:28.972541 | orchestrator | Thursday 18 September 2025 10:42:17 +0000 (0:00:00.748) 0:04:11.574 **** 2025-09-18 10:44:28.972548 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 10:44:28.972555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 
1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 10:44:28.972562 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-09-18 10:44:28.972569 | orchestrator | 2025-09-18 10:44:28.972575 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-09-18 10:44:28.972582 | orchestrator | Thursday 18 September 2025 10:42:21 +0000 (0:00:03.900) 0:04:15.475 **** 2025-09-18 10:44:28.972598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972606 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': 
['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972619 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972635 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972645 | orchestrator | 2025-09-18 10:44:28.972652 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-09-18 10:44:28.972658 | orchestrator | Thursday 18 September 2025 10:42:22 +0000 (0:00:01.226) 0:04:16.701 **** 2025-09-18 10:44:28.972665 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972672 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972679 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972699 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-09-18 10:44:28.972719 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972726 | orchestrator | 2025-09-18 10:44:28.972732 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-18 10:44:28.972739 | orchestrator | Thursday 18 September 2025 10:42:24 +0000 (0:00:01.356) 0:04:18.057 **** 2025-09-18 10:44:28.972745 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:44:28.972752 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.972758 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.972764 | orchestrator | 2025-09-18 10:44:28.972771 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-18 10:44:28.972777 | orchestrator | Thursday 18 September 2025 10:42:26 +0000 (0:00:02.634) 0:04:20.691 **** 2025-09-18 10:44:28.972784 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:44:28.972790 | orchestrator | 
changed: [testbed-node-1] 2025-09-18 10:44:28.972796 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:44:28.972803 | orchestrator | 2025-09-18 10:44:28.972809 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-09-18 10:44:28.972816 | orchestrator | Thursday 18 September 2025 10:42:29 +0000 (0:00:03.067) 0:04:23.759 **** 2025-09-18 10:44:28.972832 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-09-18 10:44:28.972840 | orchestrator | 2025-09-18 10:44:28.972846 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-09-18 10:44:28.972853 | orchestrator | Thursday 18 September 2025 10:42:31 +0000 (0:00:01.688) 0:04:25.447 **** 2025-09-18 10:44:28.972859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972870 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 
'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972887 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972900 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972907 | orchestrator | 2025-09-18 10:44:28.972923 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-09-18 10:44:28.972930 | orchestrator | Thursday 18 September 2025 10:42:32 +0000 (0:00:01.344) 0:04:26.792 **** 2025-09-18 10:44:28.972937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972944 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.972950 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': 
False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972957 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.972964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-09-18 10:44:28.972970 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.972977 | orchestrator | 2025-09-18 10:44:28.972983 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-09-18 10:44:28.972990 | orchestrator | Thursday 18 September 2025 10:42:34 +0000 (0:00:01.298) 0:04:28.091 **** 2025-09-18 10:44:28.972996 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.973003 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.973009 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.973019 | orchestrator | 2025-09-18 10:44:28.973036 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-09-18 10:44:28.973043 | orchestrator | Thursday 18 September 2025 10:42:36 +0000 (0:00:01.928) 0:04:30.020 **** 2025-09-18 10:44:28.973049 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.973056 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.973063 | orchestrator | ok: [testbed-node-2] 
2025-09-18 10:44:28.973069 | orchestrator | 2025-09-18 10:44:28.973076 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-09-18 10:44:28.973082 | orchestrator | Thursday 18 September 2025 10:42:38 +0000 (0:00:02.397) 0:04:32.417 **** 2025-09-18 10:44:28.973089 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:44:28.973095 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:44:28.973102 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:44:28.973108 | orchestrator | 2025-09-18 10:44:28.973115 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-09-18 10:44:28.973121 | orchestrator | Thursday 18 September 2025 10:42:41 +0000 (0:00:02.985) 0:04:35.403 **** 2025-09-18 10:44:28.973128 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-09-18 10:44:28.973134 | orchestrator | 2025-09-18 10:44:28.973141 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-09-18 10:44:28.973147 | orchestrator | Thursday 18 September 2025 10:42:42 +0000 (0:00:00.873) 0:04:36.276 **** 2025-09-18 10:44:28.973157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 10:44:28.973164 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.973171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 
'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 10:44:28.973177 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.973184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-09-18 10:44:28.973191 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.973197 | orchestrator | 2025-09-18 10:44:28.973204 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-09-18 10:44:28.973211 | orchestrator | Thursday 18 September 2025 10:42:43 +0000 (0:00:01.342) 0:04:37.619 **** 2025-09-18 10:44:28.973217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': 
'6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-18 10:44:28.973227 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.973234 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-18 10:44:28.973241 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.973258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})
2025-09-18 10:44:28.973265 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.973272 | orchestrator |
2025-09-18 10:44:28.973279 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] ****
2025-09-18 10:44:28.973285 | orchestrator | Thursday 18 September 2025 10:42:45 +0000 (0:00:01.521) 0:04:39.141 ****
2025-09-18 10:44:28.973292 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.973298 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.973305 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.973311 | orchestrator |
2025-09-18 10:44:28.973318 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] **********
2025-09-18 10:44:28.973324 | orchestrator | Thursday 18 September 2025 10:42:46 +0000 (0:00:01.641) 0:04:40.782 ****
2025-09-18 10:44:28.973331 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.973338 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.973344 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.973350 | orchestrator |
2025-09-18 10:44:28.973359 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] **********
2025-09-18 10:44:28.973366 | orchestrator | Thursday 18 September 2025 10:42:49 +0000 (0:00:02.467) 0:04:43.249 ****
2025-09-18 10:44:28.973373 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.973379 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.973386 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.973392 | orchestrator |
2025-09-18 10:44:28.973399 | orchestrator | TASK [include_role : octavia] **************************************************
2025-09-18 10:44:28.973406 | orchestrator | Thursday 18 September 2025 10:42:52 +0000 (0:00:03.048) 0:04:46.297 ****
2025-09-18 10:44:28.973412 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.973419 | orchestrator |
2025-09-18 10:44:28.973425 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ********************
2025-09-18 10:44:28.973432 | orchestrator | Thursday 18 September 2025 10:42:53 +0000 (0:00:01.418) 0:04:47.716 ****
2025-09-18 10:44:28.973439 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973449 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973456 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973491 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973508 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973557 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973575 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973582 | orchestrator |
2025-09-18 10:44:28.973588 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] ***
2025-09-18 10:44:28.973595 | orchestrator | Thursday 18 September 2025 10:42:57 +0000 (0:00:03.237) 0:04:50.954 ****
2025-09-18 10:44:28.973611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2',
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973646 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973653 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.973660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973675 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973683 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973710 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.973717 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.973723 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 10:44:28.973730 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973746 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 10:44:28.973754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:44:28.973761 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.973767 | orchestrator |
2025-09-18 10:44:28.973776 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] ***********************
2025-09-18 10:44:28.973783 | orchestrator | Thursday 18 September 2025 10:42:57 +0000 (0:00:00.744) 0:04:51.699 ****
2025-09-18 10:44:28.973793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973800 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973807 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.973813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973827 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.973833 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973840 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})
2025-09-18 10:44:28.973847 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.973853 | orchestrator |
2025-09-18 10:44:28.973860 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************
2025-09-18 10:44:28.973866 | orchestrator | Thursday 18 September 2025 10:42:59 +0000 (0:00:01.530) 0:04:53.230 ****
2025-09-18 10:44:28.973873 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.973879 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.973885 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.973892 | orchestrator |
2025-09-18 10:44:28.973898 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************
2025-09-18 10:44:28.973905 | orchestrator | Thursday 18 September 2025 10:43:00 +0000 (0:00:01.442) 0:04:54.673 ****
2025-09-18 10:44:28.973920 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.973927 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.973934 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.973940 | orchestrator |
2025-09-18 10:44:28.973947 | orchestrator | TASK [include_role : opensearch] ***********************************************
2025-09-18 10:44:28.973954 | orchestrator | Thursday 18 September 2025 10:43:03 +0000 (0:00:02.217) 0:04:56.890 ****
2025-09-18 10:44:28.973960 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.973967 | orchestrator |
2025-09-18 10:44:28.973973 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] *****************
2025-09-18 10:44:28.973980 | orchestrator | Thursday 18 September 2025 10:43:04 +0000 (0:00:01.409) 0:04:58.300 ****
2025-09-18 10:44:28.973996 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974007 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974049 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974058 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974076 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974085 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards':
{'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974096 | orchestrator |
2025-09-18 10:44:28.974106 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] ***
2025-09-18 10:44:28.974112 | orchestrator | Thursday 18 September 2025 10:43:10 +0000 (0:00:05.687) 0:05:03.987 ****
2025-09-18 10:44:28.974119 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974126 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974133 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.974140 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974157 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974170 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.974179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:44:28.974187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:44:28.974194 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.974200 | orchestrator |
2025-09-18 10:44:28.974207 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ********************
2025-09-18 10:44:28.974213 | orchestrator | Thursday 18 September 2025 10:43:10 +0000 (0:00:00.668) 0:05:04.656 ****
2025-09-18 10:44:28.974219 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})
2025-09-18 10:44:28.974226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-18 10:44:28.974233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})
2025-09-18 10:44:28.974243 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.974250 | orchestrator |
skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-18 10:44:28.974266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 10:44:28.974274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 10:44:28.974280 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.974287 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-09-18 10:44:28.974294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 10:44:28.974305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-09-18 10:44:28.974312 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.974318 | orchestrator | 2025-09-18 10:44:28.974325 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-09-18 10:44:28.974331 | orchestrator | Thursday 18 September 2025 10:43:11 +0000 (0:00:00.936) 0:05:05.593 **** 2025-09-18 
10:44:28.974338 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.974344 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.974351 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.974357 | orchestrator | 2025-09-18 10:44:28.974364 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-09-18 10:44:28.974370 | orchestrator | Thursday 18 September 2025 10:43:12 +0000 (0:00:00.887) 0:05:06.481 **** 2025-09-18 10:44:28.974377 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:44:28.974384 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.974390 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:44:28.974397 | orchestrator | 2025-09-18 10:44:28.974403 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-09-18 10:44:28.974410 | orchestrator | Thursday 18 September 2025 10:43:13 +0000 (0:00:01.370) 0:05:07.851 **** 2025-09-18 10:44:28.974416 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:44:28.974423 | orchestrator | 2025-09-18 10:44:28.974430 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-09-18 10:44:28.974436 | orchestrator | Thursday 18 September 2025 10:43:15 +0000 (0:00:01.427) 0:05:09.278 **** 2025-09-18 10:44:28.974443 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 
'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:44:28.974454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:44:28.974461 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974478 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:44:28.974488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974495 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:44:28.974502 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:44:28.974509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:44:28.974543 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': 
'9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:44:28.974550 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:44:28.974560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 
'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:44:28.974581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:44:28.974596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 10:44:28.974603 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974620 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:44:28.974627 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:44:28.974638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 10:44:28.974648 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:44:28.974656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974665 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-09-18 10:44:28.974672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974689 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  
2025-09-18 10:44:28.974696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:44:28.974714 | orchestrator | 2025-09-18 10:44:28.974720 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-09-18 10:44:28.974727 | orchestrator | Thursday 18 September 2025 10:43:20 +0000 (0:00:04.684) 0:05:13.963 **** 2025-09-18 10:44:28.974736 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 10:44:28.974743 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:44:28.974750 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974760 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:44:28.974767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.974777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-18 10:44:28.974785 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-18 10:44:28.974795 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-18 10:44:28.974802 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974813 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-18 10:44:28.974819 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974826 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.974843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974850 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.974859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.974866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-09-18 10:44:28.974888 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager',
'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-18 10:44:28.974895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-09-18 10:44:28.974902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-18 10:44:28.974931 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974938 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974951 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.974957 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.974964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.974975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-09-18 10:44:28.974982 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975022 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/prometheus-openstack-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-09-18 10:44:28.975034 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.975046 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-09-18 10:44:28.975053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-09-18 10:44:28.975059 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975066 | orchestrator |
2025-09-18 10:44:28.975072 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ********************
2025-09-18 10:44:28.975079 | orchestrator | Thursday 18 September 2025 10:43:21 +0000 (0:00:00.998) 0:05:14.961 ****
2025-09-18 10:44:28.975085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975092 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975099 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975113 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975161 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})
2025-09-18 10:44:28.975177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})
2025-09-18 10:44:28.975191 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975197 | orchestrator |
2025-09-18 10:44:28.975203 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] *********
2025-09-18 10:44:28.975210 | orchestrator | Thursday 18 September 2025 10:43:22 +0000 (0:00:00.902) 0:05:15.864 ****
2025-09-18 10:44:28.975216 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975223 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975229 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975235 | orchestrator |
2025-09-18 10:44:28.975242 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] *********
2025-09-18 10:44:28.975248 | orchestrator | Thursday 18 September 2025 10:43:22 +0000 (0:00:00.432) 0:05:16.296 ****
2025-09-18 10:44:28.975255 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975261 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975267 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975273 | orchestrator |
2025-09-18 10:44:28.975280 | orchestrator | TASK [include_role : rabbitmq] *************************************************
2025-09-18 10:44:28.975286 | orchestrator | Thursday 18 September 2025 10:43:23 +0000 (0:00:01.239) 0:05:17.535 ****
2025-09-18 10:44:28.975292 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:44:28.975299 | orchestrator |
2025-09-18 10:44:28.975305 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] *******************
2025-09-18 10:44:28.975312 | orchestrator | Thursday 18 September 2025 10:43:25 +0000 (0:00:01.563) 0:05:19.099 ****
2025-09-18 10:44:28.975318 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975330 |
changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975351 | orchestrator |
2025-09-18 10:44:28.975358 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] ***
2025-09-18 10:44:28.975364 | orchestrator | Thursday 18 September 2025 10:43:27 +0000 (0:00:02.397) 0:05:21.496 ****
2025-09-18 10:44:28.975371 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975378 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975385 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975395 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/rabbitmq:2024.2', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-09-18 10:44:28.975414 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975420 | orchestrator |
2025-09-18 10:44:28.975427 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] **********************
2025-09-18 10:44:28.975433 | orchestrator | Thursday 18 September 2025 10:43:28 +0000 (0:00:00.428) 0:05:21.924 ****
2025-09-18 10:44:28.975442 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-18 10:44:28.975449 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975455 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-18 10:44:28.975462 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975468 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})
2025-09-18 10:44:28.975474 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975481 | orchestrator |
2025-09-18 10:44:28.975487 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] ***********
2025-09-18 10:44:28.975494 | orchestrator | Thursday 18 September 2025 10:43:29 +0000 (0:00:01.134) 0:05:23.059 ****
2025-09-18 10:44:28.975500 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975506 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975513 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975519 | orchestrator |
2025-09-18 10:44:28.975526 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] ***********
2025-09-18 10:44:28.975532 | orchestrator | Thursday 18 September 2025 10:43:29 +0000 (0:00:00.470) 0:05:23.530 ****
2025-09-18 10:44:28.975538 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975545 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975551 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975557 | orchestrator |
2025-09-18 10:44:28.975564 | orchestrator | TASK [include_role : skyline] **************************************************
2025-09-18 10:44:28.975570 | orchestrator | Thursday 18 September 2025 10:43:31 +0000 (0:00:01.450) 0:05:24.981 ****
2025-09-18 10:44:28.975577 | orchestrator | included: skyline for testbed-node-0,
testbed-node-1, testbed-node-2
2025-09-18 10:44:28.975583 | orchestrator |
2025-09-18 10:44:28.975589 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ********************
2025-09-18 10:44:28.975596 | orchestrator | Thursday 18 September 2025 10:43:33 +0000 (0:00:01.903) 0:05:26.885 ****
2025-09-18 10:44:28.975602 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975617 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975627 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975634 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975659 | orchestrator |
2025-09-18 10:44:28.975669 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] ***
2025-09-18 10:44:28.975675 | orchestrator | Thursday 18 September 2025 10:43:39 +0000 (0:00:06.353) 0:05:33.238 ****
2025-09-18 10:44:28.975682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975691 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975698 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy':
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.975722 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:44:28.975733 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-apiserver:2024.2', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})  2025-09-18 10:44:28.975744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/skyline-console:2024.2', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': 
{'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-09-18 10:44:28.975751 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.975758 | orchestrator |
2025-09-18 10:44:28.975764 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-09-18 10:44:28.975771 | orchestrator | Thursday 18 September 2025 10:43:40 +0000 (0:00:00.635) 0:05:33.873 ****
2025-09-18 10:44:28.975777 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975784 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975794 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975801 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975807 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.975814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975827 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975840 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.975847 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975857 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-09-18 10:44:28.975877 | orchestrator | skipping:
[testbed-node-2]
2025-09-18 10:44:28.975883 | orchestrator |
2025-09-18 10:44:28.975890 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-09-18 10:44:28.975896 | orchestrator | Thursday 18 September 2025 10:43:41 +0000 (0:00:01.757) 0:05:35.631 ****
2025-09-18 10:44:28.975903 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.975909 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.975946 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.975953 | orchestrator |
2025-09-18 10:44:28.975959 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-09-18 10:44:28.975965 | orchestrator | Thursday 18 September 2025 10:43:43 +0000 (0:00:01.389) 0:05:37.020 ****
2025-09-18 10:44:28.975972 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.975978 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.975988 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.975994 | orchestrator |
2025-09-18 10:44:28.976001 | orchestrator | TASK [include_role : swift] ****************************************************
2025-09-18 10:44:28.976007 | orchestrator | Thursday 18 September 2025 10:43:45 +0000 (0:00:02.306) 0:05:39.327 ****
2025-09-18 10:44:28.976013 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976020 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976027 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976036 | orchestrator |
2025-09-18 10:44:28.976041 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-09-18 10:44:28.976047 | orchestrator | Thursday 18 September 2025 10:43:45 +0000 (0:00:00.337) 0:05:39.664 ****
2025-09-18 10:44:28.976052 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976058 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976064 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976069 | orchestrator |
2025-09-18 10:44:28.976075 | orchestrator | TASK [include_role : trove] ****************************************************
2025-09-18 10:44:28.976080 | orchestrator | Thursday 18 September 2025 10:43:46 +0000 (0:00:00.365) 0:05:40.030 ****
2025-09-18 10:44:28.976086 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976091 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976097 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976102 | orchestrator |
2025-09-18 10:44:28.976108 | orchestrator | TASK [include_role : venus] ****************************************************
2025-09-18 10:44:28.976114 | orchestrator | Thursday 18 September 2025 10:43:46 +0000 (0:00:00.790) 0:05:40.820 ****
2025-09-18 10:44:28.976119 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976125 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976130 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976136 | orchestrator |
2025-09-18 10:44:28.976141 | orchestrator | TASK [include_role : watcher] **************************************************
2025-09-18 10:44:28.976147 | orchestrator | Thursday 18 September 2025 10:43:47 +0000 (0:00:00.329) 0:05:41.150 ****
2025-09-18 10:44:28.976152 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976158 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976163 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976169 | orchestrator |
2025-09-18 10:44:28.976174 | orchestrator | TASK [include_role : zun] ******************************************************
2025-09-18 10:44:28.976180 | orchestrator | Thursday 18 September 2025 10:43:47 +0000 (0:00:00.342) 0:05:41.492 ****
2025-09-18 10:44:28.976186 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976191 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976197 | orchestrator | skipping:
[testbed-node-2]
2025-09-18 10:44:28.976202 | orchestrator |
2025-09-18 10:44:28.976208 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-09-18 10:44:28.976213 | orchestrator | Thursday 18 September 2025 10:43:48 +0000 (0:00:00.868) 0:05:42.361 ****
2025-09-18 10:44:28.976219 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976224 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976230 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976235 | orchestrator |
2025-09-18 10:44:28.976241 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-09-18 10:44:28.976247 | orchestrator | Thursday 18 September 2025 10:43:49 +0000 (0:00:00.736) 0:05:43.098 ****
2025-09-18 10:44:28.976252 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976258 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976263 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976269 | orchestrator |
2025-09-18 10:44:28.976275 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-09-18 10:44:28.976280 | orchestrator | Thursday 18 September 2025 10:43:49 +0000 (0:00:00.358) 0:05:43.456 ****
2025-09-18 10:44:28.976286 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976291 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976297 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976302 | orchestrator |
2025-09-18 10:44:28.976308 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-09-18 10:44:28.976314 | orchestrator | Thursday 18 September 2025 10:43:50 +0000 (0:00:00.948) 0:05:44.405 ****
2025-09-18 10:44:28.976319 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976325 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976331 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976336 | orchestrator |
2025-09-18 10:44:28.976342 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-09-18 10:44:28.976350 | orchestrator | Thursday 18 September 2025 10:43:51 +0000 (0:00:01.226) 0:05:45.632 ****
2025-09-18 10:44:28.976356 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976361 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976370 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976376 | orchestrator |
2025-09-18 10:44:28.976382 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-09-18 10:44:28.976387 | orchestrator | Thursday 18 September 2025 10:43:52 +0000 (0:00:00.907) 0:05:46.540 ****
2025-09-18 10:44:28.976393 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.976399 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.976404 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.976410 | orchestrator |
2025-09-18 10:44:28.976416 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-09-18 10:44:28.976421 | orchestrator | Thursday 18 September 2025 10:44:01 +0000 (0:00:08.441) 0:05:54.981 ****
2025-09-18 10:44:28.976427 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976432 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976438 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976444 | orchestrator |
2025-09-18 10:44:28.976449 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-09-18 10:44:28.976455 | orchestrator | Thursday 18 September 2025 10:44:01 +0000 (0:00:00.844) 0:05:55.826 ****
2025-09-18 10:44:28.976461 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.976467 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.976472 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.976478 | orchestrator |
2025-09-18 10:44:28.976483 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-09-18 10:44:28.976489 | orchestrator | Thursday 18 September 2025 10:44:09 +0000 (0:00:07.543) 0:06:03.370 ****
2025-09-18 10:44:28.976495 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976501 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976506 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976512 | orchestrator |
2025-09-18 10:44:28.976520 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-09-18 10:44:28.976526 | orchestrator | Thursday 18 September 2025 10:44:12 +0000 (0:00:03.270) 0:06:06.640 ****
2025-09-18 10:44:28.976531 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:44:28.976537 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:44:28.976543 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:44:28.976548 | orchestrator |
2025-09-18 10:44:28.976554 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-09-18 10:44:28.976560 | orchestrator | Thursday 18 September 2025 10:44:22 +0000 (0:00:09.308) 0:06:15.949 ****
2025-09-18 10:44:28.976565 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976571 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976577 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976582 | orchestrator |
2025-09-18 10:44:28.976588 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-09-18 10:44:28.976594 | orchestrator | Thursday 18 September 2025 10:44:22 +0000 (0:00:00.347) 0:06:16.297 ****
2025-09-18 10:44:28.976600 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976605 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976611 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976616 | orchestrator |
2025-09-18 10:44:28.976622 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-09-18 10:44:28.976628 | orchestrator | Thursday 18 September 2025 10:44:22 +0000 (0:00:00.322) 0:06:16.619 ****
2025-09-18 10:44:28.976633 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976639 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976644 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976650 | orchestrator |
2025-09-18 10:44:28.976656 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-09-18 10:44:28.976661 | orchestrator | Thursday 18 September 2025 10:44:23 +0000 (0:00:00.627) 0:06:17.247 ****
2025-09-18 10:44:28.976670 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976676 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976681 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976687 | orchestrator |
2025-09-18 10:44:28.976693 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-09-18 10:44:28.976698 | orchestrator | Thursday 18 September 2025 10:44:23 +0000 (0:00:00.296) 0:06:17.543 ****
2025-09-18 10:44:28.976704 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976710 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976715 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976721 | orchestrator |
2025-09-18 10:44:28.976726 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-09-18 10:44:28.976732 | orchestrator | Thursday 18 September 2025 10:44:24 +0000 (0:00:00.388) 0:06:17.932 ****
2025-09-18 10:44:28.976738 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:44:28.976743 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:44:28.976749 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:44:28.976755 | orchestrator |
2025-09-18 10:44:28.976760 | orchestrator | RUNNING HANDLER [loadbalancer :
Wait for haproxy to listen on VIP] *************
2025-09-18 10:44:28.976766 | orchestrator | Thursday 18 September 2025 10:44:24 +0000 (0:00:00.306) 0:06:18.238 ****
2025-09-18 10:44:28.976772 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976778 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976783 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976789 | orchestrator |
2025-09-18 10:44:28.976794 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-09-18 10:44:28.976800 | orchestrator | Thursday 18 September 2025 10:44:25 +0000 (0:00:01.165) 0:06:19.403 ****
2025-09-18 10:44:28.976806 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:44:28.976812 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:44:28.976817 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:44:28.976823 | orchestrator |
2025-09-18 10:44:28.976828 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:44:28.976834 | orchestrator | testbed-node-0 : ok=123  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-18 10:44:28.976840 | orchestrator | testbed-node-1 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-18 10:44:28.976846 | orchestrator | testbed-node-2 : ok=122  changed=76  unreachable=0 failed=0 skipped=97  rescued=0 ignored=0
2025-09-18 10:44:28.976851 | orchestrator |
2025-09-18 10:44:28.976857 | orchestrator |
2025-09-18 10:44:28.976866 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:44:28.976872 | orchestrator | Thursday 18 September 2025 10:44:26 +0000 (0:00:00.766) 0:06:20.170 ****
2025-09-18 10:44:28.976877 | orchestrator | ===============================================================================
2025-09-18 10:44:28.976883 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 9.31s
2025-09-18 10:44:28.976889 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.44s
2025-09-18 10:44:28.976894 | orchestrator | loadbalancer : Start backup proxysql container -------------------------- 7.54s
2025-09-18 10:44:28.976900 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 6.35s
2025-09-18 10:44:28.976905 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 6.18s
2025-09-18 10:44:28.976919 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 5.69s
2025-09-18 10:44:28.976925 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.68s
2025-09-18 10:44:28.976930 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 4.52s
2025-09-18 10:44:28.976936 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 4.43s
2025-09-18 10:44:28.976946 | orchestrator | service-cert-copy : loadbalancer | Copying over extra CA certificates --- 4.32s
2025-09-18 10:44:28.976954 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 4.25s
2025-09-18 10:44:28.976960 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.18s
2025-09-18 10:44:28.976965 | orchestrator | haproxy-config : Copying over horizon haproxy config -------------------- 4.07s
2025-09-18 10:44:28.976971 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.01s
2025-09-18 10:44:28.976977 | orchestrator | loadbalancer : Copying over config.json files for services -------------- 3.96s
2025-09-18 10:44:28.976982 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 3.90s
2025-09-18 10:44:28.976988 | orchestrator | loadbalancer : Copying over proxysql config ----------------------------- 3.89s
2025-09-18 10:44:28.976993 | orchestrator | haproxy-config : Copying over manila haproxy config --------------------- 3.83s
2025-09-18 10:44:28.976999 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 3.75s
2025-09-18 10:44:28.977005 | orchestrator | loadbalancer : Copying over custom haproxy services configuration ------- 3.61s
2025-09-18 10:44:28.977010 | orchestrator | 2025-09-18 10:44:28 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:44:28.977016 | orchestrator | 2025-09-18 10:44:28 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:44:28.977021 | orchestrator | 2025-09-18 10:44:28 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:44:28.977027 | orchestrator | 2025-09-18 10:44:28 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:44:31.992115 | orchestrator | 2025-09-18 10:44:31 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:44:31.992240 | orchestrator | 2025-09-18 10:44:31 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:44:31.993012 | orchestrator | 2025-09-18 10:44:31 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:44:31.993080 | orchestrator | 2025-09-18 10:44:31 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:44:35.065983 | orchestrator | 2025-09-18 10:44:35 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:44:35.066531 | orchestrator | 2025-09-18 10:44:35 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:44:35.069792 | orchestrator | 2025-09-18 10:44:35 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED
2025-09-18 10:44:35.070146 | orchestrator | 2025-09-18 10:44:35 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:44:38.103588 | orchestrator | 2025-09-18 10:44:38 | INFO  | Task
bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
state STARTED 2025-09-18 10:45:23.804196 | orchestrator | 2025-09-18 10:45:23 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:23.804221 | orchestrator | 2025-09-18 10:45:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:26.844228 | orchestrator | 2025-09-18 10:45:26 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:26.845525 | orchestrator | 2025-09-18 10:45:26 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:26.847696 | orchestrator | 2025-09-18 10:45:26 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:26.847725 | orchestrator | 2025-09-18 10:45:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:29.894894 | orchestrator | 2025-09-18 10:45:29 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:29.895024 | orchestrator | 2025-09-18 10:45:29 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:29.896505 | orchestrator | 2025-09-18 10:45:29 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:29.896536 | orchestrator | 2025-09-18 10:45:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:32.928183 | orchestrator | 2025-09-18 10:45:32 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:32.929727 | orchestrator | 2025-09-18 10:45:32 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:32.933086 | orchestrator | 2025-09-18 10:45:32 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:32.933235 | orchestrator | 2025-09-18 10:45:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:35.986113 | orchestrator | 2025-09-18 10:45:35 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:35.986430 | orchestrator 
| 2025-09-18 10:45:35 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:35.989075 | orchestrator | 2025-09-18 10:45:35 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:35.989128 | orchestrator | 2025-09-18 10:45:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:39.034131 | orchestrator | 2025-09-18 10:45:39 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:39.035238 | orchestrator | 2025-09-18 10:45:39 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:39.036806 | orchestrator | 2025-09-18 10:45:39 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:39.036902 | orchestrator | 2025-09-18 10:45:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:42.082200 | orchestrator | 2025-09-18 10:45:42 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:42.083630 | orchestrator | 2025-09-18 10:45:42 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:42.085668 | orchestrator | 2025-09-18 10:45:42 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:42.085830 | orchestrator | 2025-09-18 10:45:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:45.130861 | orchestrator | 2025-09-18 10:45:45 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:45.132557 | orchestrator | 2025-09-18 10:45:45 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:45.137319 | orchestrator | 2025-09-18 10:45:45 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:45.137335 | orchestrator | 2025-09-18 10:45:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:48.189848 | orchestrator | 2025-09-18 10:45:48 | INFO  | Task 
bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:48.189947 | orchestrator | 2025-09-18 10:45:48 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:48.190485 | orchestrator | 2025-09-18 10:45:48 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:48.190511 | orchestrator | 2025-09-18 10:45:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:51.224443 | orchestrator | 2025-09-18 10:45:51 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:51.225509 | orchestrator | 2025-09-18 10:45:51 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:51.227181 | orchestrator | 2025-09-18 10:45:51 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:51.227410 | orchestrator | 2025-09-18 10:45:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:54.266982 | orchestrator | 2025-09-18 10:45:54 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:54.267082 | orchestrator | 2025-09-18 10:45:54 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:54.268114 | orchestrator | 2025-09-18 10:45:54 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:54.268229 | orchestrator | 2025-09-18 10:45:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:45:57.316448 | orchestrator | 2025-09-18 10:45:57 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:45:57.317398 | orchestrator | 2025-09-18 10:45:57 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:45:57.318992 | orchestrator | 2025-09-18 10:45:57 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:45:57.319020 | orchestrator | 2025-09-18 10:45:57 | INFO  | Wait 1 second(s) until the next 
check 2025-09-18 10:46:00.368133 | orchestrator | 2025-09-18 10:46:00 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:00.369124 | orchestrator | 2025-09-18 10:46:00 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:00.370565 | orchestrator | 2025-09-18 10:46:00 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:00.371036 | orchestrator | 2025-09-18 10:46:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:03.419476 | orchestrator | 2025-09-18 10:46:03 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:03.421265 | orchestrator | 2025-09-18 10:46:03 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:03.423677 | orchestrator | 2025-09-18 10:46:03 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:03.423820 | orchestrator | 2025-09-18 10:46:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:06.469658 | orchestrator | 2025-09-18 10:46:06 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:06.472143 | orchestrator | 2025-09-18 10:46:06 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:06.473737 | orchestrator | 2025-09-18 10:46:06 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:06.474081 | orchestrator | 2025-09-18 10:46:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:09.521331 | orchestrator | 2025-09-18 10:46:09 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:09.522635 | orchestrator | 2025-09-18 10:46:09 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:09.524280 | orchestrator | 2025-09-18 10:46:09 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 
10:46:09.524386 | orchestrator | 2025-09-18 10:46:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:12.571972 | orchestrator | 2025-09-18 10:46:12 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:12.573006 | orchestrator | 2025-09-18 10:46:12 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:12.574988 | orchestrator | 2025-09-18 10:46:12 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:12.575116 | orchestrator | 2025-09-18 10:46:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:15.623376 | orchestrator | 2025-09-18 10:46:15 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:15.625556 | orchestrator | 2025-09-18 10:46:15 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:15.627365 | orchestrator | 2025-09-18 10:46:15 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:15.627618 | orchestrator | 2025-09-18 10:46:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:18.674471 | orchestrator | 2025-09-18 10:46:18 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:18.676730 | orchestrator | 2025-09-18 10:46:18 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:18.678882 | orchestrator | 2025-09-18 10:46:18 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:18.679389 | orchestrator | 2025-09-18 10:46:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:21.719961 | orchestrator | 2025-09-18 10:46:21 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:21.721329 | orchestrator | 2025-09-18 10:46:21 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:21.723171 | orchestrator | 2025-09-18 10:46:21 | 
INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:21.723304 | orchestrator | 2025-09-18 10:46:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:24.777637 | orchestrator | 2025-09-18 10:46:24 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:24.780263 | orchestrator | 2025-09-18 10:46:24 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:24.782138 | orchestrator | 2025-09-18 10:46:24 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:24.782402 | orchestrator | 2025-09-18 10:46:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:27.840643 | orchestrator | 2025-09-18 10:46:27 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:27.841626 | orchestrator | 2025-09-18 10:46:27 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:27.843106 | orchestrator | 2025-09-18 10:46:27 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:27.843246 | orchestrator | 2025-09-18 10:46:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:30.897010 | orchestrator | 2025-09-18 10:46:30 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:30.897541 | orchestrator | 2025-09-18 10:46:30 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:30.899634 | orchestrator | 2025-09-18 10:46:30 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:30.899869 | orchestrator | 2025-09-18 10:46:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:33.957963 | orchestrator | 2025-09-18 10:46:33 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:33.960385 | orchestrator | 2025-09-18 10:46:33 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in 
state STARTED 2025-09-18 10:46:33.962197 | orchestrator | 2025-09-18 10:46:33 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:33.962221 | orchestrator | 2025-09-18 10:46:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:37.027227 | orchestrator | 2025-09-18 10:46:37 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:37.028358 | orchestrator | 2025-09-18 10:46:37 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:37.030314 | orchestrator | 2025-09-18 10:46:37 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:37.030387 | orchestrator | 2025-09-18 10:46:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:40.083354 | orchestrator | 2025-09-18 10:46:40 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:40.085756 | orchestrator | 2025-09-18 10:46:40 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:40.089079 | orchestrator | 2025-09-18 10:46:40 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state STARTED 2025-09-18 10:46:40.089901 | orchestrator | 2025-09-18 10:46:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:46:43.147236 | orchestrator | 2025-09-18 10:46:43 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:46:43.148835 | orchestrator | 2025-09-18 10:46:43 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:46:43.150542 | orchestrator | 2025-09-18 10:46:43 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:46:43.155477 | orchestrator | 2025-09-18 10:46:43 | INFO  | Task 7caf18e3-5254-4aaf-b894-bd1db8fa0629 is in state SUCCESS 2025-09-18 10:46:43.157480 | orchestrator | 2025-09-18 10:46:43.157513 | orchestrator | 2025-09-18 10:46:43.157525 | orchestrator | PLAY [Prepare 
deployment of Ceph services] ************************************* 2025-09-18 10:46:43.157537 | orchestrator | 2025-09-18 10:46:43.157549 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-18 10:46:43.157560 | orchestrator | Thursday 18 September 2025 10:35:33 +0000 (0:00:00.789) 0:00:00.789 **** 2025-09-18 10:46:43.157573 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.157585 | orchestrator | 2025-09-18 10:46:43.157596 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-18 10:46:43.157737 | orchestrator | Thursday 18 September 2025 10:35:34 +0000 (0:00:01.087) 0:00:01.876 **** 2025-09-18 10:46:43.157755 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.157768 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.157779 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.157790 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.157801 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.157812 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.157823 | orchestrator | 2025-09-18 10:46:43.157879 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-18 10:46:43.157891 | orchestrator | Thursday 18 September 2025 10:35:36 +0000 (0:00:01.694) 0:00:03.571 **** 2025-09-18 10:46:43.157902 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.157913 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.157924 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.157935 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.157973 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.157985 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.157996 | orchestrator | 2025-09-18 10:46:43.158007 | orchestrator | TASK 
[ceph-facts : Check if podman binary is present] ************************** 2025-09-18 10:46:43.158063 | orchestrator | Thursday 18 September 2025 10:35:37 +0000 (0:00:00.711) 0:00:04.282 **** 2025-09-18 10:46:43.158078 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.158089 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.158100 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.158111 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.158122 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.158133 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.158143 | orchestrator | 2025-09-18 10:46:43.158154 | orchestrator | TASK [ceph-facts : Set_fact container_binary] ********************************** 2025-09-18 10:46:43.158165 | orchestrator | Thursday 18 September 2025 10:35:38 +0000 (0:00:01.065) 0:00:05.348 **** 2025-09-18 10:46:43.158177 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.158188 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.158199 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.158210 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.158221 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.158232 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.158243 | orchestrator | 2025-09-18 10:46:43.158254 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ****************************************** 2025-09-18 10:46:43.158265 | orchestrator | Thursday 18 September 2025 10:35:38 +0000 (0:00:00.699) 0:00:06.048 **** 2025-09-18 10:46:43.158275 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.158286 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.158297 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.158308 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.158318 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.158329 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.158340 | orchestrator | 2025-09-18 
10:46:43.158351 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] ********************* 2025-09-18 10:46:43.158362 | orchestrator | Thursday 18 September 2025 10:35:39 +0000 (0:00:00.575) 0:00:06.623 **** 2025-09-18 10:46:43.158373 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.158383 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.158394 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.158405 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.158541 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.158555 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.158566 | orchestrator | 2025-09-18 10:46:43.158577 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] *** 2025-09-18 10:46:43.158588 | orchestrator | Thursday 18 September 2025 10:35:40 +0000 (0:00:00.836) 0:00:07.460 **** 2025-09-18 10:46:43.158599 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.158611 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.158623 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.158634 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.158667 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.158680 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.158691 | orchestrator | 2025-09-18 10:46:43.158702 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ****************** 2025-09-18 10:46:43.158713 | orchestrator | Thursday 18 September 2025 10:35:41 +0000 (0:00:00.844) 0:00:08.304 **** 2025-09-18 10:46:43.158724 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.158735 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.158746 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.158757 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.158767 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.158778 | orchestrator | ok: 
[testbed-node-0] 2025-09-18 10:46:43.158789 | orchestrator | 2025-09-18 10:46:43.158816 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************ 2025-09-18 10:46:43.158828 | orchestrator | Thursday 18 September 2025 10:35:42 +0000 (0:00:00.926) 0:00:09.231 **** 2025-09-18 10:46:43.158849 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 10:46:43.158860 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 10:46:43.158871 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 10:46:43.159009 | orchestrator | 2025-09-18 10:46:43.159038 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ******************************** 2025-09-18 10:46:43.159050 | orchestrator | Thursday 18 September 2025 10:35:42 +0000 (0:00:00.691) 0:00:09.922 **** 2025-09-18 10:46:43.159060 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.159071 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.159082 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.159093 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.159104 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.159114 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.159125 | orchestrator | 2025-09-18 10:46:43.159150 | orchestrator | TASK [ceph-facts : Find a running mon container] ******************************* 2025-09-18 10:46:43.159162 | orchestrator | Thursday 18 September 2025 10:35:44 +0000 (0:00:01.286) 0:00:11.208 **** 2025-09-18 10:46:43.159173 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 10:46:43.159184 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 10:46:43.159195 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => 
(item=testbed-node-2) 2025-09-18 10:46:43.159205 | orchestrator | 2025-09-18 10:46:43.159216 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ******************************** 2025-09-18 10:46:43.159227 | orchestrator | Thursday 18 September 2025 10:35:47 +0000 (0:00:03.025) 0:00:14.233 **** 2025-09-18 10:46:43.159238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 10:46:43.159249 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 10:46:43.159260 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 10:46:43.159271 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.159281 | orchestrator | 2025-09-18 10:46:43.159292 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] ********************* 2025-09-18 10:46:43.159303 | orchestrator | Thursday 18 September 2025 10:35:47 +0000 (0:00:00.454) 0:00:14.688 **** 2025-09-18 10:46:43.159317 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159331 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159342 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159353 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.159364 | orchestrator | 2025-09-18 10:46:43.159375 | orchestrator | TASK [ceph-facts : Set_fact running_mon - 
non_container] *********************** 2025-09-18 10:46:43.159386 | orchestrator | Thursday 18 September 2025 10:35:48 +0000 (0:00:01.056) 0:00:15.745 **** 2025-09-18 10:46:43.159399 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159413 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159434 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159445 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.159456 | orchestrator | 2025-09-18 10:46:43.159467 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-18 10:46:43.159484 | orchestrator | Thursday 18 September 2025 10:35:48 +0000 (0:00:00.191) 0:00:15.936 **** 2025-09-18 10:46:43.159678 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 
'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-18 10:35:44.871935', 'end': '2025-09-18 10:35:45.127372', 'delta': '0:00:00.255437', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159694 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-18 10:35:45.780831', 'end': '2025-09-18 10:35:46.055278', 'delta': '0:00:00.274447', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.159706 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-18 10:35:46.672483', 'end': '2025-09-18 10:35:46.945945', 'delta': '0:00:00.273462', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 
'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
2025-09-18 10:46:43.159718 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.159729 | orchestrator |
2025-09-18 10:46:43.159740 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] *******************************
2025-09-18 10:46:43.159751 | orchestrator | Thursday 18 September 2025 10:35:49 +0000 (0:00:00.482) 0:00:16.418 ****
2025-09-18 10:46:43.159762 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.159773 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.159784 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.159795 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.159806 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.159825 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.159836 | orchestrator |
2025-09-18 10:46:43.159848 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] *************
2025-09-18 10:46:43.159858 | orchestrator | Thursday 18 September 2025 10:35:50 +0000 (0:00:01.453) 0:00:17.872 ****
2025-09-18 10:46:43.159869 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.159880 | orchestrator |
2025-09-18 10:46:43.159891 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] *********************************
2025-09-18 10:46:43.159902 | orchestrator | Thursday 18 September 2025 10:35:51 +0000 (0:00:01.081) 0:00:18.954 ****
2025-09-18 10:46:43.159912 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.159923 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.159934 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.159945 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.159956 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.159966 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.159977 | orchestrator |
2025-09-18 10:46:43.159988 | orchestrator | TASK [ceph-facts : Get current fsid] *******************************************
2025-09-18 10:46:43.159999 | orchestrator | Thursday 18 September 2025 10:35:53 +0000 (0:00:01.435) 0:00:20.389 ****
2025-09-18 10:46:43.160010 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160021 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160031 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160042 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160053 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160138 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160149 | orchestrator |
2025-09-18 10:46:43.160160 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-18 10:46:43.160171 | orchestrator | Thursday 18 September 2025 10:35:54 +0000 (0:00:01.324) 0:00:21.714 ****
2025-09-18 10:46:43.160182 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160193 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160203 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160214 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160225 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160235 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160246 | orchestrator |
2025-09-18 10:46:43.160262 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] ****************************
2025-09-18 10:46:43.160274 | orchestrator | Thursday 18 September 2025 10:35:55 +0000 (0:00:00.978) 0:00:22.692 ****
2025-09-18 10:46:43.160284 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160295 | orchestrator |
2025-09-18 10:46:43.160306 | orchestrator | TASK [ceph-facts : Generate cluster fsid] **************************************
2025-09-18 10:46:43.160317 | orchestrator | Thursday 18 September 2025 10:35:55 +0000 (0:00:00.362) 0:00:23.055 ****
2025-09-18 10:46:43.160328 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160339 | orchestrator |
2025-09-18 10:46:43.160350 | orchestrator | TASK [ceph-facts : Set_fact fsid] **********************************************
2025-09-18 10:46:43.160361 | orchestrator | Thursday 18 September 2025 10:35:56 +0000 (0:00:00.315) 0:00:23.370 ****
2025-09-18 10:46:43.160371 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160382 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160393 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160404 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160415 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160425 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160436 | orchestrator |
2025-09-18 10:46:43.160453 | orchestrator | TASK [ceph-facts : Resolve device link(s)] *************************************
2025-09-18 10:46:43.160465 | orchestrator | Thursday 18 September 2025 10:35:57 +0000 (0:00:00.857) 0:00:24.228 ****
2025-09-18 10:46:43.160476 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160486 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160497 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160516 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160527 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160758 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160774 | orchestrator |
2025-09-18 10:46:43.160785 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] **************
2025-09-18 10:46:43.160797 | orchestrator | Thursday 18 September 2025 10:35:58 +0000 (0:00:01.062) 0:00:25.290 ****
2025-09-18 10:46:43.160807 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160818 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160829 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160866 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160879 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160889 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160900 | orchestrator |
2025-09-18 10:46:43.160911 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] ***************************
2025-09-18 10:46:43.160922 | orchestrator | Thursday 18 September 2025 10:35:59 +0000 (0:00:00.891) 0:00:26.182 ****
2025-09-18 10:46:43.160933 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.160943 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.160954 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.160965 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.160976 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.160986 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.160997 | orchestrator |
2025-09-18 10:46:43.161008 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] ****
2025-09-18 10:46:43.161019 | orchestrator | Thursday 18 September 2025 10:36:00 +0000 (0:00:01.174) 0:00:27.356 ****
2025-09-18 10:46:43.161029 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.161040 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.161051 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.161061 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.161072 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.161083 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.161093 | orchestrator |
2025-09-18 10:46:43.161104 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] ***********************
2025-09-18 10:46:43.161115 | orchestrator | Thursday 18 September 2025 10:36:00 +0000 (0:00:00.595) 0:00:27.952 ****
2025-09-18 10:46:43.161126 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.161137 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.161147 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.161218 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.161231 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.161242 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.161252 | orchestrator |
2025-09-18 10:46:43.161263 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] ***
2025-09-18 10:46:43.161274 | orchestrator | Thursday 18 September 2025 10:36:01 +0000 (0:00:00.862) 0:00:28.815 ****
2025-09-18 10:46:43.161285 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.161296 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.161306 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.161317 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.161328 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.161338 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.161349 | orchestrator |
2025-09-18 10:46:43.161360 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************
2025-09-18 10:46:43.161371 | orchestrator | Thursday 18 September 2025 10:36:02 +0000 (0:00:00.721) 0:00:29.536 ****
2025-09-18 10:46:43.161383 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--727b3796--a5b5--597b--af2a--93b7c6d70a12-osd--block--727b3796--a5b5--597b--af2a--93b7c6d70a12', 'dm-uuid-LVM-Vaw5CJk2C3mO0tSxBixUJ0g2po36vShOlLaLHQgIe5no13lbKLZquyFXaJIAjng0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161411 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f-osd--block--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f', 'dm-uuid-LVM-hzyUORPsuwHDklNrX83rcpbROwYLAjcCmjBB4YT4C0vy2i12R6hzhkFvyVNjl7Ie'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161456 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161467 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161501 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161551 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part1', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part14', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part15', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part16', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161566 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--727b3796--a5b5--597b--af2a--93b7c6d70a12-osd--block--727b3796--a5b5--597b--af2a--93b7c6d70a12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M1FVlv-9D5q-lxd6-Riu1-KasI-Hcdt-5OI0oS', 'scsi-0QEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2', 'scsi-SQEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161580 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f-osd--block--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4X8uJ2-YZWG-qtzE-dAFO-At1b-uENL-AqM0bt', 'scsi-0QEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040', 'scsi-SQEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161599 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2', 'scsi-SQEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161617 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161636 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7-osd--block--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7', 'dm-uuid-LVM-yC7WLVYhTkV75h34D1fzIvnr47MYnyLcfJJ6y9smbo0iQM2OTQm1fH2b0thbo0ZN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a586834--03f6--5ee9--b58c--2d4644436c0e-osd--block--7a586834--03f6--5ee9--b58c--2d4644436c0e', 'dm-uuid-LVM-eWaIkBgFirJ2OipAcVXLq4k3mPWELeYtbQiUTn3TezFU30xUcRw7G8STGryTifyp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161726 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161786 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.161803 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161835 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.161847 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part1', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part14', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part15', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part16', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7-osd--block--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dcfXwQ-o4x6-2xYP-2XW1-PBCd-GWqM-v290he', 'scsi-0QEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70', 'scsi-SQEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.161883 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--7a586834--03f6--5ee9--b58c--2d4644436c0e-osd--block--7a586834--03f6--5ee9--b58c--2d4644436c0e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mPVnVN-zO8W-fS7j-zylO-aYxp-imbs-kPqbMC', 'scsi-0QEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940', 'scsi-SQEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.163548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4', 'scsi-SQEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.163587 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.163598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47a403a8--a225--5ee6--9198--c4852ee3470e-osd--block--47a403a8--a225--5ee6--9198--c4852ee3470e', 'dm-uuid-LVM-0fKX4cfymsz5amMcqzBfiCgoZkhUeauNxj0GsySfSMS8VgLgqJt1MG0b7sDje6Kx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163608 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a661e8c0--0419--5fc2--afc1--c6737c299168-osd--block--a661e8c0--0419--5fc2--afc1--c6737c299168', 'dm-uuid-LVM-tu4MSQ7U1BANsHFB4tWHe0vFmhIQTVwpi62cEXVQfQHrQvMXvt2TqyheRXw2ewup'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163630 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.163641 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163679 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163696 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163707 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163727 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163747 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163783 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163792 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163815 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a', 'scsi-SQEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part1', 'scsi-SQEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part14', 'scsi-SQEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part15', 'scsi-SQEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part16', 'scsi-SQEMU_QEMU_HARDDISK_66fc429d-b5e6-4c66-945d-f5e80dd7853a-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.163827 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-09-18 10:46:43.163837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163853 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163863 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-09-18 10:46:43.163873 | orchestrator | skipping:
[testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163887 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163897 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163912 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163933 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163943 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163973 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.163994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part1', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part14', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part15', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part16', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164006 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164036 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164053 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--47a403a8--a225--5ee6--9198--c4852ee3470e-osd--block--47a403a8--a225--5ee6--9198--c4852ee3470e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mfSKw9-qlCs-k70n-J14h-tqMO-YAjr-iRt747', 'scsi-0QEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52', 'scsi-SQEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164064 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.164074 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.164084 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--a661e8c0--0419--5fc2--afc1--c6737c299168-osd--block--a661e8c0--0419--5fc2--afc1--c6737c299168'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TTS66v-0VlV-Zuar-0Fk8-NhPj-NrRf-nhrckx', 'scsi-0QEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836', 'scsi-SQEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164100 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64', 'scsi-SQEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164110 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164121 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.164132 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164143 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 
'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164159 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164198 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 
'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:46:43.164253 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': 
{'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:46:43.164297 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.164313 | orchestrator | 2025-09-18 10:46:43.164330 | orchestrator | TASK [ceph-facts : Set_fact devices generate device list when osd_auto_discovery] *** 2025-09-18 10:46:43.164348 | orchestrator | Thursday 18 September 2025 10:36:04 +0000 (0:00:01.677) 0:00:31.214 **** 2025-09-18 10:46:43.164366 | orchestrator | skipping: [testbed-node-4] => 
(item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7-osd--block--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7', 'dm-uuid-LVM-yC7WLVYhTkV75h34D1fzIvnr47MYnyLcfJJ6y9smbo0iQM2OTQm1fH2b0thbo0ZN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164394 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a586834--03f6--5ee9--b58c--2d4644436c0e-osd--block--7a586834--03f6--5ee9--b58c--2d4644436c0e', 'dm-uuid-LVM-eWaIkBgFirJ2OipAcVXLq4k3mPWELeYtbQiUTn3TezFU30xUcRw7G8STGryTifyp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164413 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 
'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164431 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164456 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164475 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164493 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164504 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.164514 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 
Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-18 10:46:43.164524 | orchestrator | skipping: [testbed-node-4] => (items loop7, sda, sdb, sdc, sdd, sr0 — skip_reason: 'Conditional result was False', false_condition: 'osd_auto_discovery | default(False) | bool')
2025-09-18 10:46:43.164604 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, sdd, sr0 — same skip_reason and false_condition)
2025-09-18 10:46:43.164771 | orchestrator | skipping: [testbed-node-3] => (items dm-0, dm-1, loop0–loop7, sda, sdb, sdc, sdd — same skip_reason and false_condition)
2025-09-18 10:46:43.164868 | orchestrator | skipping: [testbed-node-0] => (items loop0–loop7, sda — skip_reason: 'Conditional result was False', false_condition: 'inventory_hostname in groups.get(osd_group_name, [])')
2025-09-18 10:46:43.165015 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:46:43.165434 | orchestrator | skipping: [testbed-node-1] => (item loop0 — same skip_reason and false_condition as testbed-node-0)
Each skipped loop item carried the full ansible_devices fact for that disk: QEMU HARDDISK virtio-scsi devices (sda, 80.00 GB, partitions sda1 'cloudimg-rootfs' / sda14 / sda15 'UEFI' / sda16 'BOOT'; sdb, sdc, sdd, 20.00 GB each, with sdb and sdc holding LVM-backed Ceph OSD volumes), dm-0 and dm-1 ceph osd-block device mappers (20.00 GB each), empty loop0–loop7 devices, and the sr0 QEMU DVD-ROM 'config-2' drive (506.00 KB).
2025-09-18 10:46:43.165485 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.165510 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165529 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-08-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165548 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 
'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165565 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.165583 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.165600 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165618 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165733 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 
'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165750 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165768 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165779 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': 
[], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165789 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165799 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165809 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': 
[], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165830 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165846 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165857 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part1', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part14', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part15', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part16', 'scsi-SQEMU_QEMU_HARDDISK_05b7d5c7-8ae0-478b-9c11-f5c3a25542ec-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-18 10:46:43.165874 | orchestrator | skipping: [testbed-node-2] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-12-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165884 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.165896 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165909 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': 
None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165918 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165927 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part1', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part14', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part15', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part16', 'scsi-SQEMU_QEMU_HARDDISK_4afe00d7-77e4-4bb6-991c-926b9ce2357f-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-18 10:46:43.165944 | orchestrator | skipping: [testbed-node-1] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'inventory_hostname in groups.get(osd_group_name, [])', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-10-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:46:43.165952 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.165961 | orchestrator | 2025-09-18 10:46:43.165973 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-18 10:46:43.165981 | orchestrator | Thursday 18 September 2025 10:36:05 +0000 (0:00:01.084) 0:00:32.299 **** 2025-09-18 10:46:43.165989 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.165997 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.166005 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.166013 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.166063 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.166071 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.166079 | orchestrator | 2025-09-18 10:46:43.166087 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-18 10:46:43.166095 | orchestrator | Thursday 18 September 2025 10:36:06 +0000 (0:00:01.228) 0:00:33.527 **** 2025-09-18 10:46:43.166103 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.166111 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.166119 | 
orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.166126 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.166134 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.166142 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.166150 | orchestrator | 2025-09-18 10:46:43.166158 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 10:46:43.166166 | orchestrator | Thursday 18 September 2025 10:36:07 +0000 (0:00:00.580) 0:00:34.108 **** 2025-09-18 10:46:43.166174 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.166182 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.166190 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.166198 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166205 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166213 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166221 | orchestrator | 2025-09-18 10:46:43.166229 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 10:46:43.166237 | orchestrator | Thursday 18 September 2025 10:36:08 +0000 (0:00:01.011) 0:00:35.120 **** 2025-09-18 10:46:43.166245 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.166259 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.166267 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.166274 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166282 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166290 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166298 | orchestrator | 2025-09-18 10:46:43.166306 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 10:46:43.166314 | orchestrator | Thursday 18 September 2025 10:36:08 +0000 (0:00:00.815) 0:00:35.935 **** 2025-09-18 10:46:43.166321 | orchestrator | skipping: 
[testbed-node-3] 2025-09-18 10:46:43.166329 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.166337 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.166345 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166353 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166360 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166368 | orchestrator | 2025-09-18 10:46:43.166376 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 10:46:43.166384 | orchestrator | Thursday 18 September 2025 10:36:09 +0000 (0:00:01.055) 0:00:36.991 **** 2025-09-18 10:46:43.166392 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.166400 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.166407 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.166415 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166423 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166431 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166438 | orchestrator | 2025-09-18 10:46:43.166446 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-18 10:46:43.166454 | orchestrator | Thursday 18 September 2025 10:36:11 +0000 (0:00:01.221) 0:00:38.212 **** 2025-09-18 10:46:43.166462 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-18 10:46:43.166471 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-18 10:46:43.166478 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-18 10:46:43.166486 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-18 10:46:43.166494 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 10:46:43.166502 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-18 10:46:43.166510 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 
2025-09-18 10:46:43.166517 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-18 10:46:43.166525 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0) 2025-09-18 10:46:43.166533 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-18 10:46:43.166541 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0) 2025-09-18 10:46:43.166548 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1) 2025-09-18 10:46:43.166556 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-18 10:46:43.166564 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-18 10:46:43.166572 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1) 2025-09-18 10:46:43.166579 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2) 2025-09-18 10:46:43.166591 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-18 10:46:43.166599 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2) 2025-09-18 10:46:43.166607 | orchestrator | 2025-09-18 10:46:43.166615 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-18 10:46:43.166623 | orchestrator | Thursday 18 September 2025 10:36:15 +0000 (0:00:04.424) 0:00:42.637 **** 2025-09-18 10:46:43.166631 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 10:46:43.166639 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 10:46:43.166660 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 10:46:43.166668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-18 10:46:43.166676 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-18 10:46:43.166689 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-18 10:46:43.166697 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.166705 | orchestrator | skipping: [testbed-node-5] 
=> (item=testbed-node-0)  2025-09-18 10:46:43.166725 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-18 10:46:43.166747 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-18 10:46:43.166755 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.166763 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 10:46:43.166771 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 10:46:43.166779 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 10:46:43.166787 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.166795 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)  2025-09-18 10:46:43.166802 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)  2025-09-18 10:46:43.166810 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)  2025-09-18 10:46:43.166818 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166826 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166834 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)  2025-09-18 10:46:43.166842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)  2025-09-18 10:46:43.166849 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)  2025-09-18 10:46:43.166857 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166865 | orchestrator | 2025-09-18 10:46:43.166873 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-18 10:46:43.166881 | orchestrator | Thursday 18 September 2025 10:36:16 +0000 (0:00:01.128) 0:00:43.765 **** 2025-09-18 10:46:43.166889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.166897 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.166904 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.166912 | orchestrator | 
included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.166921 | orchestrator |
2025-09-18 10:46:43.166929 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-09-18 10:46:43.166937 | orchestrator | Thursday 18 September 2025 10:36:18 +0000 (0:00:01.549) 0:00:45.314 ****
2025-09-18 10:46:43.166945 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.166953 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.166961 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.166968 | orchestrator |
2025-09-18 10:46:43.166976 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-09-18 10:46:43.166984 | orchestrator | Thursday 18 September 2025 10:36:18 +0000 (0:00:00.494) 0:00:45.809 ****
2025-09-18 10:46:43.166992 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167000 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.167008 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.167016 | orchestrator |
2025-09-18 10:46:43.167023 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-09-18 10:46:43.167031 | orchestrator | Thursday 18 September 2025 10:36:19 +0000 (0:00:00.494) 0:00:46.303 ****
2025-09-18 10:46:43.167039 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167047 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.167055 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.167063 | orchestrator |
2025-09-18 10:46:43.167071 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
2025-09-18 10:46:43.167078 | orchestrator | Thursday 18 September 2025 10:36:19 +0000 (0:00:00.444) 0:00:46.748 ****
2025-09-18 10:46:43.167086 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.167094 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.167102 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.167116 | orchestrator |
2025-09-18 10:46:43.167124 | orchestrator | TASK [ceph-facts : Set_fact _interface] ****************************************
2025-09-18 10:46:43.167132 | orchestrator | Thursday 18 September 2025 10:36:21 +0000 (0:00:01.874) 0:00:48.623 ****
2025-09-18 10:46:43.167140 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.167148 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:46:43.167156 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:46:43.167164 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167172 | orchestrator |
2025-09-18 10:46:43.167180 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-09-18 10:46:43.167187 | orchestrator | Thursday 18 September 2025 10:36:22 +0000 (0:00:00.764) 0:00:49.388 ****
2025-09-18 10:46:43.167196 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.167203 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:46:43.167211 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:46:43.167219 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167227 | orchestrator |
2025-09-18 10:46:43.167235 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-09-18 10:46:43.167249 | orchestrator | Thursday 18 September 2025 10:36:22 +0000 (0:00:00.660) 0:00:50.049 ****
2025-09-18 10:46:43.167257 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.167265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:46:43.167273 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:46:43.167281 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167289 | orchestrator |
2025-09-18 10:46:43.167297 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
2025-09-18 10:46:43.167305 | orchestrator | Thursday 18 September 2025 10:36:23 +0000 (0:00:00.428) 0:00:50.477 ****
2025-09-18 10:46:43.167313 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.167321 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.167328 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.167336 | orchestrator |
2025-09-18 10:46:43.167344 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] *************************************
2025-09-18 10:46:43.167352 | orchestrator | Thursday 18 September 2025 10:36:23 +0000 (0:00:00.350) 0:00:50.828 ****
2025-09-18 10:46:43.167360 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-18 10:46:43.167368 | orchestrator | ok: [testbed-node-4] => (item=0)
2025-09-18 10:46:43.167381 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-09-18 10:46:43.167390 | orchestrator |
2025-09-18 10:46:43.167398 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] **************************************
2025-09-18 10:46:43.167406 | orchestrator | Thursday 18 September 2025 10:36:24 +0000 (0:00:01.108) 0:00:51.937 ****
2025-09-18 10:46:43.167414 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-18 10:46:43.167422 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-18 10:46:43.167430 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-18 10:46:43.167438 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.167445 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-18 10:46:43.167453 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-18 10:46:43.167461 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-18 10:46:43.167469 | orchestrator |
2025-09-18 10:46:43.167477 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ********************************
2025-09-18 10:46:43.167485 | orchestrator | Thursday 18 September 2025 10:36:26 +0000 (0:00:01.478) 0:00:53.415 ****
2025-09-18 10:46:43.167492 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-18 10:46:43.167506 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-18 10:46:43.167513 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-18 10:46:43.167521 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.167529 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-09-18 10:46:43.167537 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-09-18 10:46:43.167545 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-09-18 10:46:43.167553 | orchestrator |
2025-09-18 10:46:43.167560 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-18 10:46:43.167568 | orchestrator | Thursday 18 September 2025 10:36:29 +0000 (0:00:03.221) 0:00:56.636 ****
2025-09-18 10:46:43.167577 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.167585 | orchestrator |
2025-09-18 10:46:43.167593 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-18 10:46:43.167601 | orchestrator | Thursday 18 September 2025 10:36:31 +0000 (0:00:01.699) 0:00:58.336 ****
2025-09-18 10:46:43.167609 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.167617 | orchestrator |
2025-09-18 10:46:43.167625 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-18 10:46:43.167632 | orchestrator | Thursday 18 September 2025 10:36:32 +0000 (0:00:01.070) 0:00:59.407 ****
2025-09-18 10:46:43.167640 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167662 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.167670 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.167678 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.167686 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.167694 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.167702 | orchestrator |
2025-09-18 10:46:43.167710 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-18 10:46:43.167718 | orchestrator | Thursday 18 September 2025 10:36:33 +0000 (0:00:01.399) 0:01:00.806 ****
2025-09-18 10:46:43.167726 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.167734 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.167742 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.167749 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.167757 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.167765 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.167773 | orchestrator |
2025-09-18 10:46:43.167781 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-18 10:46:43.167789 | orchestrator | Thursday 18 September 2025 10:36:34 +0000 (0:00:01.186) 0:01:01.993 ****
2025-09-18 10:46:43.167797 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.167805 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.167812 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.167820 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.167832 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.167840 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.167848 | orchestrator |
2025-09-18 10:46:43.167856 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-18 10:46:43.167864 | orchestrator | Thursday 18 September 2025 10:36:37 +0000 (0:00:02.746) 0:01:04.740 ****
2025-09-18 10:46:43.167872 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.167880 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.167888 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.167896 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.167903 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.167916 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.167924 | orchestrator |
2025-09-18 10:46:43.167932 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-18 10:46:43.167940 | orchestrator | Thursday 18 September 2025 10:36:38 +0000 (0:00:01.117) 0:01:05.858 ****
2025-09-18 10:46:43.167947 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.167956 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.167964 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.167971 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.167979 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.167991 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.167999 | orchestrator |
2025-09-18 10:46:43.168007 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-18 10:46:43.168015 | orchestrator | Thursday 18 September 2025 10:36:39 +0000 (0:00:01.142) 0:01:07.000 ****
2025-09-18 10:46:43.168023 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168031 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168039 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168046 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168054 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168062 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168070 | orchestrator |
2025-09-18 10:46:43.168078 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-18 10:46:43.168086 | orchestrator | Thursday 18 September 2025 10:36:40 +0000 (0:00:00.882) 0:01:07.882 ****
2025-09-18 10:46:43.168094 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168102 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168110 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168117 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168125 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168133 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168141 | orchestrator |
2025-09-18 10:46:43.168149 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-18 10:46:43.168156 | orchestrator | Thursday 18 September 2025 10:36:41 +0000 (0:00:00.631) 0:01:08.514 ****
2025-09-18 10:46:43.168164 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168172 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168180 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168188 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.168196 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.168203 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.168211 | orchestrator |
2025-09-18 10:46:43.168219 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-18 10:46:43.168227 | orchestrator | Thursday 18 September 2025 10:36:42 +0000 (0:00:01.437) 0:01:09.952 ****
2025-09-18 10:46:43.168235 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168242 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168250 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168258 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.168266 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.168273 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.168281 | orchestrator |
2025-09-18 10:46:43.168289 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-18 10:46:43.168297 | orchestrator | Thursday 18 September 2025 10:36:44 +0000 (0:00:01.187) 0:01:11.139 ****
2025-09-18 10:46:43.168305 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168312 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168320 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168328 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168336 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168343 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168351 | orchestrator |
2025-09-18 10:46:43.168359 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-18 10:46:43.168367 | orchestrator | Thursday 18 September 2025 10:36:45 +0000 (0:00:01.081) 0:01:12.221 ****
2025-09-18 10:46:43.168380 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168388 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168395 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168403 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.168411 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.168419 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.168427 | orchestrator |
2025-09-18 10:46:43.168435 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-18 10:46:43.168442 | orchestrator | Thursday 18 September 2025 10:36:45 +0000 (0:00:00.712) 0:01:12.933 ****
2025-09-18 10:46:43.168450 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168458 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168466 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168474 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168482 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168489 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168497 | orchestrator |
2025-09-18 10:46:43.168505 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-18 10:46:43.168513 | orchestrator | Thursday 18 September 2025 10:36:46 +0000 (0:00:01.056) 0:01:13.990 ****
2025-09-18 10:46:43.168521 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168529 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168537 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168544 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168552 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168560 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168568 | orchestrator |
2025-09-18 10:46:43.168576 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-18 10:46:43.168584 | orchestrator | Thursday 18 September 2025 10:36:47 +0000 (0:00:00.731) 0:01:14.721 ****
2025-09-18 10:46:43.168591 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168599 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168607 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168615 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168626 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168634 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168674 | orchestrator |
2025-09-18 10:46:43.168684 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-18 10:46:43.168692 | orchestrator | Thursday 18 September 2025 10:36:48 +0000 (0:00:00.924) 0:01:15.646 ****
2025-09-18 10:46:43.168700 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168708 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168715 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168723 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168731 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168739 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168747 | orchestrator |
2025-09-18 10:46:43.168755 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-18 10:46:43.168763 | orchestrator | Thursday 18 September 2025 10:36:49 +0000 (0:00:00.605) 0:01:16.251 ****
2025-09-18 10:46:43.168771 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168779 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168787 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168795 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.168807 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.168815 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.168823 | orchestrator |
2025-09-18 10:46:43.168831 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-18 10:46:43.168839 | orchestrator | Thursday 18 September 2025 10:36:50 +0000 (0:00:01.080) 0:01:17.332 ****
2025-09-18 10:46:43.168847 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.168855 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.168863 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.168876 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.168884 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.168892 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.168900 | orchestrator |
2025-09-18 10:46:43.168908 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-18 10:46:43.168916 | orchestrator | Thursday 18 September 2025 10:36:50 +0000 (0:00:00.694) 0:01:18.027 ****
2025-09-18 10:46:43.168924 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.168932 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.168939 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.168947 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.168955 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.168963 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.168971 | orchestrator |
2025-09-18 10:46:43.168979 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-18 10:46:43.168987 | orchestrator | Thursday 18 September 2025 10:36:52 +0000 (0:00:01.216) 0:01:19.243 ****
2025-09-18 10:46:43.168994 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.169002 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.169010 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.169018 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.169026 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.169033 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.169041 | orchestrator |
2025-09-18 10:46:43.169048 | orchestrator | TASK [ceph-container-common : Generate systemd ceph target file] ***************
2025-09-18 10:46:43.169055 | orchestrator | Thursday 18 September 2025 10:36:53 +0000 (0:00:01.414) 0:01:20.657 ****
2025-09-18 10:46:43.169062 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.169068 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.169075 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.169082 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:46:43.169088 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:46:43.169095 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:46:43.169102 | orchestrator |
2025-09-18 10:46:43.169108 | orchestrator | TASK [ceph-container-common : Enable ceph.target] ******************************
2025-09-18 10:46:43.169115 | orchestrator | Thursday 18 September 2025 10:36:55 +0000 (0:00:02.062) 0:01:22.720 ****
2025-09-18 10:46:43.169122 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.169128 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.169135 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.169142 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:46:43.169148 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:46:43.169155 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:46:43.169162 | orchestrator |
2025-09-18 10:46:43.169168 | orchestrator | TASK [ceph-container-common : Include prerequisites.yml] ***********************
2025-09-18 10:46:43.169175 | orchestrator | Thursday 18 September 2025 10:36:58 +0000 (0:00:02.665) 0:01:25.386 ****
2025-09-18 10:46:43.169182 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.169189 | orchestrator |
2025-09-18 10:46:43.169196 | orchestrator | TASK [ceph-container-common : Stop lvmetad] ************************************
2025-09-18 10:46:43.169202 | orchestrator | Thursday 18 September 2025 10:36:59 +0000 (0:00:01.226) 0:01:26.612 ****
2025-09-18 10:46:43.169209 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169215 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169222 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169229 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169235 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169242 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.169249 | orchestrator |
2025-09-18 10:46:43.169255 | orchestrator | TASK [ceph-container-common : Disable and mask lvmetad service] ****************
2025-09-18 10:46:43.169262 | orchestrator | Thursday 18 September 2025 10:37:00 +0000 (0:00:00.631) 0:01:27.244 ****
2025-09-18 10:46:43.169269 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169280 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169286 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169293 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169300 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169306 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.169313 | orchestrator |
2025-09-18 10:46:43.169320 | orchestrator | TASK [ceph-container-common : Remove ceph udev rules] **************************
2025-09-18 10:46:43.169326 | orchestrator | Thursday 18 September 2025 10:37:01 +0000 (0:00:00.859) 0:01:28.104 ****
2025-09-18 10:46:43.169333 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169343 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169350 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169356 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169363 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169370 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
2025-09-18 10:46:43.169376 | orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169383 | orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169390 | orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169396 | orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169406 | orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169413 | orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
2025-09-18 10:46:43.169420 | orchestrator |
2025-09-18 10:46:43.169427 | orchestrator | TASK [ceph-container-common : Ensure tmpfiles.d is present] ********************
2025-09-18 10:46:43.169434 | orchestrator | Thursday 18 September 2025 10:37:02 +0000 (0:00:01.543) 0:01:29.647 ****
2025-09-18 10:46:43.169440 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.169447 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.169453 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.169460 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:46:43.169467 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:46:43.169474 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:46:43.169480 | orchestrator |
2025-09-18 10:46:43.169487 | orchestrator | TASK [ceph-container-common : Restore certificates selinux context] ************
2025-09-18 10:46:43.169494 | orchestrator | Thursday 18 September 2025 10:37:03 +0000 (0:00:01.242) 0:01:30.889 ****
2025-09-18 10:46:43.169500 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169507 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169514 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169520 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169527 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169533 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.169540 | orchestrator |
2025-09-18 10:46:43.169547 | orchestrator | TASK [ceph-container-common : Install python3 on osd nodes] ********************
2025-09-18 10:46:43.169553 | orchestrator | Thursday 18 September 2025 10:37:04 +0000 (0:00:00.630) 0:01:31.520 ****
2025-09-18 10:46:43.169560 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169567 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169573 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169580 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169586 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169593 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.169600 | orchestrator |
2025-09-18 10:46:43.169606 | orchestrator | TASK [ceph-container-common : Include registry.yml] ****************************
2025-09-18 10:46:43.169617 | orchestrator | Thursday 18 September 2025 10:37:05 +0000 (0:00:00.877) 0:01:32.398 ****
2025-09-18 10:46:43.169624 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169631 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169637 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169660 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169667 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169674 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.169680 | orchestrator |
2025-09-18 10:46:43.169687 | orchestrator | TASK [ceph-container-common : Include fetch_image.yml] *************************
2025-09-18 10:46:43.169694 | orchestrator | Thursday 18 September 2025 10:37:05 +0000 (0:00:00.619) 0:01:33.017 ****
2025-09-18 10:46:43.169701 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.169707 | orchestrator |
2025-09-18 10:46:43.169714 | orchestrator | TASK [ceph-container-common : Pulling Ceph container image] ********************
2025-09-18 10:46:43.169721 | orchestrator | Thursday 18 September 2025 10:37:07 +0000 (0:00:01.301) 0:01:34.319 ****
2025-09-18 10:46:43.169727 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.169735 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.169746 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.169757 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.169768 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.169778 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.169788 | orchestrator |
2025-09-18 10:46:43.169799 | orchestrator | TASK [ceph-container-common : Pulling alertmanager/prometheus/grafana container images] ***
2025-09-18 10:46:43.169811 | orchestrator | Thursday 18 September 2025 10:37:52 +0000 (0:00:45.384) 0:02:19.704 ****
2025-09-18 10:46:43.169821 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169832 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.169841 | orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.169848 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.169854 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169861 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.169870 | orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.169881 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.169893 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169909 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.169917 | orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.169924 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.169930 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169937 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.169944 | orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.169950 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.169957 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169964 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.169970 | orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.169977 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.169988 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
2025-09-18 10:46:43.169995 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
2025-09-18 10:46:43.170007 | orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
2025-09-18 10:46:43.170014 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170049 | orchestrator |
2025-09-18 10:46:43.170056 | orchestrator | TASK [ceph-container-common : Pulling node-exporter container image] ***********
2025-09-18 10:46:43.170062 | orchestrator | Thursday 18 September 2025 10:37:53 +0000 (0:00:00.652) 0:02:20.356 ****
2025-09-18 10:46:43.170069 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170076 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170083 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170090 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170096 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170103 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170109 | orchestrator |
2025-09-18 10:46:43.170116 | orchestrator | TASK [ceph-container-common : Export local ceph dev image] *********************
2025-09-18 10:46:43.170123 | orchestrator | Thursday 18 September 2025 10:37:54 +0000 (0:00:00.843) 0:02:21.200 ****
2025-09-18 10:46:43.170129 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170136 | orchestrator |
2025-09-18 10:46:43.170143 | orchestrator | TASK [ceph-container-common : Copy ceph dev image file] ************************
2025-09-18 10:46:43.170154 | orchestrator | Thursday 18 September 2025 10:37:54 +0000 (0:00:00.168) 0:02:21.369 ****
2025-09-18 10:46:43.170164 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170174 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170184 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170195 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170205 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170215 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170226 | orchestrator |
2025-09-18 10:46:43.170237 | orchestrator | TASK [ceph-container-common : Load ceph dev image] *****************************
2025-09-18 10:46:43.170248 | orchestrator | Thursday 18 September 2025 10:37:54 +0000 (0:00:00.641) 0:02:22.011 ****
2025-09-18 10:46:43.170259 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170265 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170272 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170279 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170285 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170292 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170298 | orchestrator |
2025-09-18 10:46:43.170305 | orchestrator | TASK [ceph-container-common : Remove tmp ceph dev image file] ******************
2025-09-18 10:46:43.170316 | orchestrator | Thursday 18 September 2025 10:37:55 +0000 (0:00:00.880) 0:02:22.891 ****
2025-09-18 10:46:43.170326 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170338 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170350 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170360 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170373 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170379 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170386 | orchestrator |
2025-09-18 10:46:43.170393 | orchestrator | TASK [ceph-container-common : Get ceph version] ********************************
2025-09-18 10:46:43.170400 | orchestrator | Thursday 18 September 2025 10:37:56 +0000 (0:00:00.661) 0:02:23.553 ****
2025-09-18 10:46:43.170406 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.170413 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.170420 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.170426 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.170433 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.170440 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.170446 | orchestrator |
2025-09-18 10:46:43.170453 | orchestrator | TASK [ceph-container-common : Set_fact ceph_version ceph_version.stdout.split] ***
2025-09-18 10:46:43.170460 | orchestrator | Thursday 18 September 2025 10:37:58 +0000 (0:00:02.504) 0:02:26.057 ****
2025-09-18 10:46:43.170467 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.170473 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.170485 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.170492 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.170498 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.170505 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.170512 | orchestrator |
2025-09-18 10:46:43.170518 | orchestrator | TASK [ceph-container-common : Include release.yml] *****************************
2025-09-18 10:46:43.170525 | orchestrator | Thursday 18 September 2025 10:37:59 +0000 (0:00:00.925) 0:02:26.983 ****
2025-09-18 10:46:43.170532 | orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.170539 | orchestrator |
2025-09-18 10:46:43.170546 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release jewel] *********************
2025-09-18 10:46:43.170552 | orchestrator | Thursday 18 September 2025 10:38:01 +0000 (0:00:01.576) 0:02:28.560 ****
2025-09-18 10:46:43.170559 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170566 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170576 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170583 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170590 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170596 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.170603 | orchestrator |
2025-09-18 10:46:43.170609 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release kraken] ********************
2025-09-18 10:46:43.170616 | orchestrator | Thursday 18 September 2025 10:38:02 +0000 (0:00:00.792) 0:02:29.352 ****
2025-09-18 10:46:43.170622 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.170629 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.170636 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.170682 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.170690 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.170697 |
orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.170704 | orchestrator | 2025-09-18 10:46:43.170710 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release luminous] ****************** 2025-09-18 10:46:43.170717 | orchestrator | Thursday 18 September 2025 10:38:03 +0000 (0:00:00.883) 0:02:30.236 **** 2025-09-18 10:46:43.170724 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.170730 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.170737 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.170756 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.170763 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.170769 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.170776 | orchestrator | 2025-09-18 10:46:43.170783 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release mimic] ********************* 2025-09-18 10:46:43.170790 | orchestrator | Thursday 18 September 2025 10:38:03 +0000 (0:00:00.636) 0:02:30.872 **** 2025-09-18 10:46:43.170796 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.170803 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.170810 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.170816 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.170823 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.170829 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.170836 | orchestrator | 2025-09-18 10:46:43.170843 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release nautilus] ****************** 2025-09-18 10:46:43.170849 | orchestrator | Thursday 18 September 2025 10:38:04 +0000 (0:00:00.804) 0:02:31.677 **** 2025-09-18 10:46:43.170856 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.170863 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.170869 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.170876 | 
orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.170882 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.170889 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.170895 | orchestrator | 2025-09-18 10:46:43.170903 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release octopus] ******************* 2025-09-18 10:46:43.170914 | orchestrator | Thursday 18 September 2025 10:38:05 +0000 (0:00:00.580) 0:02:32.257 **** 2025-09-18 10:46:43.170920 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.170926 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.170932 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.170938 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.170945 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.170951 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.170957 | orchestrator | 2025-09-18 10:46:43.170963 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release pacific] ******************* 2025-09-18 10:46:43.170969 | orchestrator | Thursday 18 September 2025 10:38:05 +0000 (0:00:00.705) 0:02:32.963 **** 2025-09-18 10:46:43.170976 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.170982 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.170988 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.170994 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.171000 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.171006 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.171012 | orchestrator | 2025-09-18 10:46:43.171019 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release quincy] ******************** 2025-09-18 10:46:43.171025 | orchestrator | Thursday 18 September 2025 10:38:06 +0000 (0:00:00.610) 0:02:33.573 **** 2025-09-18 10:46:43.171031 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.171037 | 
orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.171043 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.171050 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.171056 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.171062 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.171068 | orchestrator | 2025-09-18 10:46:43.171074 | orchestrator | TASK [ceph-container-common : Set_fact ceph_release reef] ********************** 2025-09-18 10:46:43.171080 | orchestrator | Thursday 18 September 2025 10:38:07 +0000 (0:00:00.712) 0:02:34.286 **** 2025-09-18 10:46:43.171087 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.171093 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.171099 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.171105 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.171112 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.171118 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.171124 | orchestrator | 2025-09-18 10:46:43.171130 | orchestrator | TASK [ceph-config : Include create_ceph_initial_dirs.yml] ********************** 2025-09-18 10:46:43.171136 | orchestrator | Thursday 18 September 2025 10:38:08 +0000 (0:00:01.138) 0:02:35.424 **** 2025-09-18 10:46:43.171143 | orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.171149 | orchestrator | 2025-09-18 10:46:43.171155 | orchestrator | TASK [ceph-config : Create ceph initial directories] *************************** 2025-09-18 10:46:43.171161 | orchestrator | Thursday 18 September 2025 10:38:09 +0000 (0:00:01.056) 0:02:36.481 **** 2025-09-18 10:46:43.171168 | orchestrator | changed: [testbed-node-3] => (item=/etc/ceph) 2025-09-18 10:46:43.171174 | orchestrator | changed: [testbed-node-4] => (item=/etc/ceph) 2025-09-18 10:46:43.171180 | 
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph) 2025-09-18 10:46:43.171186 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171192 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171198 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph) 2025-09-18 10:46:43.171211 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171217 | orchestrator | changed: [testbed-node-1] => (item=/etc/ceph) 2025-09-18 10:46:43.171223 | orchestrator | changed: [testbed-node-2] => (item=/etc/ceph) 2025-09-18 10:46:43.171229 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171236 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171246 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171252 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171259 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171265 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/) 2025-09-18 10:46:43.171271 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171277 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171283 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171293 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171299 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171305 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon) 2025-09-18 10:46:43.171312 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds) 2025-09-18 10:46:43.171318 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds) 2025-09-18 
10:46:43.171324 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds) 2025-09-18 10:46:43.171330 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171336 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171342 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd) 2025-09-18 10:46:43.171349 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171355 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171361 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171367 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds) 2025-09-18 10:46:43.171373 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds) 2025-09-18 10:46:43.171380 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171386 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171392 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds) 2025-09-18 10:46:43.171398 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171404 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171410 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171417 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171423 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp) 2025-09-18 10:46:43.171429 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171435 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171441 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171447 | 
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171453 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171460 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171466 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171472 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/crash) 2025-09-18 10:46:43.171478 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171484 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171491 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171497 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171503 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171509 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw) 2025-09-18 10:46:43.171522 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171528 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171534 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171541 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171547 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 10:46:43.171553 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw) 2025-09-18 10:46:43.171559 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171565 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 
10:46:43.171571 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171577 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 10:46:43.171584 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171590 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171599 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr) 2025-09-18 10:46:43.171605 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171612 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171618 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171624 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171630 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171637 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171655 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171661 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds) 2025-09-18 10:46:43.171667 | orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph) 2025-09-18 10:46:43.171677 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 10:46:43.171684 | orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph) 2025-09-18 10:46:43.171690 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 10:46:43.171697 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd) 2025-09-18 10:46:43.171703 | orchestrator | changed: [testbed-node-3] => 
(item=/var/log/ceph) 2025-09-18 10:46:43.171709 | orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph) 2025-09-18 10:46:43.171715 | orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph) 2025-09-18 10:46:43.171721 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171728 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171734 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd) 2025-09-18 10:46:43.171740 | orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph) 2025-09-18 10:46:43.171746 | orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171752 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171758 | orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror) 2025-09-18 10:46:43.171764 | orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph) 2025-09-18 10:46:43.171771 | orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph) 2025-09-18 10:46:43.171777 | orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph) 2025-09-18 10:46:43.171783 | orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph) 2025-09-18 10:46:43.171794 | orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph) 2025-09-18 10:46:43.171800 | orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph) 2025-09-18 10:46:43.171806 | orchestrator | 2025-09-18 10:46:43.171812 | orchestrator | TASK [ceph-config : Include_tasks rgw_systemd_environment_file.yml] ************ 2025-09-18 10:46:43.171819 | orchestrator | Thursday 18 September 2025 10:38:16 +0000 (0:00:07.314) 0:02:43.796 **** 2025-09-18 10:46:43.171825 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.171831 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.171837 | 
orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.171844 | orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:46:43.171850 | orchestrator | 2025-09-18 10:46:43.171856 | orchestrator | TASK [ceph-config : Create rados gateway instance directories] ***************** 2025-09-18 10:46:43.171862 | orchestrator | Thursday 18 September 2025 10:38:17 +0000 (0:00:01.205) 0:02:45.001 **** 2025-09-18 10:46:43.171868 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171876 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171882 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171888 | orchestrator | 2025-09-18 10:46:43.171895 | orchestrator | TASK [ceph-config : Generate environment file] ********************************* 2025-09-18 10:46:43.171901 | orchestrator | Thursday 18 September 2025 10:38:18 +0000 (0:00:00.915) 0:02:45.917 **** 2025-09-18 10:46:43.171907 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171913 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171920 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081}) 2025-09-18 10:46:43.171926 | orchestrator | 2025-09-18 10:46:43.171932 | orchestrator | TASK [ceph-config : Reset num_osds] ******************************************** 
2025-09-18 10:46:43.171938 | orchestrator | Thursday 18 September 2025 10:38:20 +0000 (0:00:01.458) 0:02:47.376 **** 2025-09-18 10:46:43.171944 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.171951 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.171957 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.171963 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.171969 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.171976 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.171982 | orchestrator | 2025-09-18 10:46:43.171991 | orchestrator | TASK [ceph-config : Count number of osds for lvm scenario] ********************* 2025-09-18 10:46:43.171998 | orchestrator | Thursday 18 September 2025 10:38:20 +0000 (0:00:00.527) 0:02:47.903 **** 2025-09-18 10:46:43.172004 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.172010 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.172016 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.172022 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172029 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172035 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172041 | orchestrator | 2025-09-18 10:46:43.172047 | orchestrator | TASK [ceph-config : Look up for ceph-volume rejected devices] ****************** 2025-09-18 10:46:43.172053 | orchestrator | Thursday 18 September 2025 10:38:21 +0000 (0:00:00.840) 0:02:48.743 **** 2025-09-18 10:46:43.172059 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172066 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172072 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172078 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172088 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172094 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172101 | orchestrator | 2025-09-18 
10:46:43.172110 | orchestrator | TASK [ceph-config : Set_fact rejected_devices] ********************************* 2025-09-18 10:46:43.172117 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:00.660) 0:02:49.403 **** 2025-09-18 10:46:43.172123 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172129 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172135 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172141 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172147 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172154 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172160 | orchestrator | 2025-09-18 10:46:43.172166 | orchestrator | TASK [ceph-config : Set_fact _devices] ***************************************** 2025-09-18 10:46:43.172172 | orchestrator | Thursday 18 September 2025 10:38:22 +0000 (0:00:00.671) 0:02:50.075 **** 2025-09-18 10:46:43.172179 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172185 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172191 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172197 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172203 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172209 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172215 | orchestrator | 2025-09-18 10:46:43.172222 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-09-18 10:46:43.172228 | orchestrator | Thursday 18 September 2025 10:38:23 +0000 (0:00:00.702) 0:02:50.778 **** 2025-09-18 10:46:43.172234 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172241 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172247 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172253 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172259 | 
orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172265 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172271 | orchestrator | 2025-09-18 10:46:43.172277 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-09-18 10:46:43.172284 | orchestrator | Thursday 18 September 2025 10:38:24 +0000 (0:00:00.711) 0:02:51.489 **** 2025-09-18 10:46:43.172290 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172296 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172302 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172308 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172315 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172321 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172327 | orchestrator | 2025-09-18 10:46:43.172333 | orchestrator | TASK [ceph-config : Set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-09-18 10:46:43.172340 | orchestrator | Thursday 18 September 2025 10:38:25 +0000 (0:00:00.761) 0:02:52.251 **** 2025-09-18 10:46:43.172346 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.172352 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.172358 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.172364 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172371 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172377 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172383 | orchestrator | 2025-09-18 10:46:43.172389 | orchestrator | TASK [ceph-config : Run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-09-18 10:46:43.172395 | orchestrator | Thursday 18 September 2025 10:38:25 +0000 (0:00:00.791) 0:02:53.042 **** 2025-09-18 10:46:43.172401 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172408 | 
orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172414 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172420 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.172426 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.172436 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.172443 | orchestrator | 2025-09-18 10:46:43.172449 | orchestrator | TASK [ceph-config : Set_fact num_osds (add existing osds)] ********************* 2025-09-18 10:46:43.172455 | orchestrator | Thursday 18 September 2025 10:38:29 +0000 (0:00:03.266) 0:02:56.309 **** 2025-09-18 10:46:43.172461 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.172467 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.172473 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.172480 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172486 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172492 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172498 | orchestrator | 2025-09-18 10:46:43.172504 | orchestrator | TASK [ceph-config : Set_fact _osd_memory_target] ******************************* 2025-09-18 10:46:43.172511 | orchestrator | Thursday 18 September 2025 10:38:30 +0000 (0:00:00.779) 0:02:57.088 **** 2025-09-18 10:46:43.172517 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.172523 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.172529 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.172535 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.172542 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.172548 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.172554 | orchestrator | 2025-09-18 10:46:43.172560 | orchestrator | TASK [ceph-config : Set osd_memory_target to cluster host config] ************** 2025-09-18 10:46:43.172566 | orchestrator | Thursday 18 September 2025 10:38:31 +0000 (0:00:01.409) 0:02:58.497 **** 2025-09-18 
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Render rgw configs] ****************************************
Thursday 18 September 2025 10:38:32 +0000 (0:00:00.667) 0:02:59.165 ****
ok: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
ok: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
ok: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set config to cluster] *************************************
Thursday 18 September 2025 10:38:32 +0000 (0:00:00.803) 0:02:59.968 ****
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log'}])
skipping: [testbed-node-3] => (item=[{'key': 'client.rgw.default.testbed-node-3.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-3.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.13:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.13:8081'}])
skipping: [testbed-node-3]
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log'}])
skipping: [testbed-node-4] => (item=[{'key': 'client.rgw.default.testbed-node-4.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-4.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.14:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.14:8081'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'log_file', 'value': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log'}])
skipping: [testbed-node-5] => (item=[{'key': 'client.rgw.default.testbed-node-5.rgw0', 'value': {'log_file': '/var/log/ceph/ceph-rgw-default-testbed-node-5.rgw0.log', 'rgw_frontends': 'beast endpoint=192.168.16.15:8081'}}, {'key': 'rgw_frontends', 'value': 'beast endpoint=192.168.16.15:8081'}])
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Set rgw configs to file] ***********************************
Thursday 18 September 2025 10:38:33 +0000 (0:00:00.537) 0:03:00.505 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : Create ceph conf directory] ********************************
Thursday 18 September 2025 10:38:34 +0000 (0:00:00.776) 0:03:01.282 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 18 September 2025 10:38:34 +0000 (0:00:00.491) 0:03:01.773 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 18 September 2025 10:38:35 +0000 (0:00:01.009) 0:03:02.783 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 18 September 2025 10:38:36 +0000 (0:00:00.739) 0:03:03.522 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] ***************
Thursday 18 September 2025 10:38:37 +0000 (0:00:00.919) 0:03:04.442 ****
ok: [testbed-node-3]
skipping: [testbed-node-0]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact _interface] ****************************************
Thursday 18 September 2025 10:38:38 +0000 (0:00:00.894) 0:03:05.336 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 18 September 2025 10:38:38 +0000 (0:00:00.556) 0:03:05.894 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 18 September 2025 10:38:39 +0000 (0:00:00.592) 0:03:06.487 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

TASK [ceph-facts : Reset rgw_instances (workaround)] ***************************
Thursday 18 September 2025 10:38:40 +0000 (0:00:00.650) 0:03:07.137 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : Set_fact rgw_instances] *************************************
Thursday 18 September 2025 10:38:40 +0000 (0:00:00.577) 0:03:07.715 ****
ok: [testbed-node-5] => (item=0)
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-1] => (item=0)
ok: [testbed-node-3] => (item=0)
ok: [testbed-node-4] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-config : Generate Ceph file] ****************************************
Thursday 18 September 2025 10:38:42 +0000 (0:00:02.135) 0:03:09.850 ****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-4]
changed: [testbed-node-0]
changed: [testbed-node-2]
changed: [testbed-node-1]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 18 September 2025 10:38:45 +0000 (0:00:02.919) 0:03:12.769 ****
changed: [testbed-node-3]
changed: [testbed-node-5]
changed: [testbed-node-0]
changed: [testbed-node-4]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mons handler] **********************************
Thursday 18 September 2025 10:38:47 +0000 (0:00:01.679) 0:03:14.449 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ********
Thursday 18 September 2025 10:38:48 +0000 (0:00:00.972) 0:03:15.422 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mon restart script] ***********************
Thursday 18 September 2025 10:38:48 +0000 (0:00:00.305) 0:03:15.727 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ********************
Thursday 18 September 2025 10:38:50 +0000 (0:00:01.475) 0:03:17.203 ****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] *********
Thursday 18 September 2025 10:38:51 +0000 (0:00:01.144) 0:03:18.347 ****
ok: [testbed-node-1]
ok: [testbed-node-0]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Osds handler] **********************************
Thursday 18 September 2025 10:38:51 +0000 (0:00:00.338) 0:03:18.686 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
Thursday 18 September 2025 10:38:52 +0000 (0:00:01.063) 0:03:19.750 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
Thursday 18 September 2025 10:38:53 +0000 (0:00:00.372) 0:03:20.122 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
Thursday 18 September 2025 10:38:53 +0000 (0:00:00.573) 0:03:20.696 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
Thursday 18 September 2025 10:38:53 +0000 (0:00:00.210) 0:03:20.907 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Get pool list] *********************************
Thursday 18 September 2025 10:38:54 +0000 (0:00:00.330) 0:03:21.238 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
Thursday 18 September 2025 10:38:54 +0000 (0:00:00.207) 0:03:21.445 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
Thursday 18 September 2025 10:38:54 +0000 (0:00:00.200) 0:03:21.646 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
Thursday 18 September 2025 10:38:54 +0000 (0:00:00.095) 0:03:21.742 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
Thursday 18 September 2025 10:38:54 +0000 (0:00:00.220) 0:03:21.963 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
Thursday 18 September 2025 10:38:55 +0000 (0:00:00.204) 0:03:22.167 ****
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
Thursday 18 September 2025 10:38:55 +0000 (0:00:00.557) 0:03:22.724 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
Thursday 18 September 2025 10:38:56 +0000 (0:00:00.616) 0:03:23.340 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
Thursday 18 September 2025 10:38:56 +0000 (0:00:00.300) 0:03:23.641 ****
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Thursday 18 September 2025 10:38:56 +0000 (0:00:00.357) 0:03:23.999 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Thursday 18 September 2025 10:38:57 +0000 (0:00:00.948) 0:03:24.947 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Thursday 18 September 2025 10:38:58 +0000 (0:00:00.685) 0:03:25.633 ****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Thursday 18 September 2025 10:38:59 +0000 (0:00:01.247) 0:03:26.880 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Thursday 18 September 2025 10:39:00 +0000 (0:00:00.661) 0:03:27.542 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Thursday 18 September 2025 10:39:00 +0000 (0:00:00.509) 0:03:28.052 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Thursday 18 September 2025 10:39:02 +0000 (0:00:01.200) 0:03:29.252 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Thursday 18 September 2025 10:39:02 +0000 (0:00:00.295) 0:03:29.548 ****
changed: [testbed-node-4]
changed: [testbed-node-3]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Thursday 18 September 2025 10:39:03 +0000 (0:00:01.464) 0:03:31.012 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Thursday 18 September 2025 10:39:04 +0000 (0:00:00.547) 0:03:31.560 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rbdmirrors handler] ****************************
Thursday 18 September 2025 10:39:04 +0000 (0:00:00.300) 0:03:31.860 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
Thursday 18 September 2025 10:39:05 +0000 (0:00:00.807) 0:03:32.668 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
Thursday 18 September 2025 10:39:06 +0000 (0:00:01.081) 0:03:33.749 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
Thursday 18 September 2025 10:39:06 +0000 (0:00:00.262) 0:03:34.011 ****
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
Thursday 18 September 2025 10:39:08 +0000 (0:00:01.516) 0:03:35.528 ****
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
Thursday 18 September 2025 10:39:09 +0000 (0:00:00.638) 0:03:36.167 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 18 September 2025 10:39:09 +0000 (0:00:00.792) 0:03:36.959 ****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 18 September 2025 10:39:10 +0000 (0:00:00.841) 0:03:37.801 ****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 18 September 2025 10:39:11 +0000 (0:00:00.685) 0:03:38.486 ****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 18 September 2025 10:39:12 +0000 (0:00:01.182) 0:03:39.669 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 18 September 2025 10:39:13 +0000 (0:00:00.497) 0:03:40.166 ****
skipping: [testbed-node-0]
skipping: [testbed-node-2]
skipping: [testbed-node-1]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 18 September 2025 10:39:13 +0000 (0:00:00.637) 0:03:40.804 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 18 September 2025 10:39:14 +0000 (0:00:00.683) 0:03:41.487 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 18 September 2025 10:39:15 +0000 (0:00:00.991) 0:03:42.479 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 18 September 2025 10:39:15 +0000 (0:00:00.303) 0:03:42.783 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 18 September 2025 10:39:16 +0000 (0:00:00.559) 0:03:43.342 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 18 September 2025 10:39:17 +0000 (0:00:01.094) 0:03:44.437 ****
ok: [testbed-node-0]
ok: [testbed-node-2]
ok: [testbed-node-1]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 18 September 2025 10:39:18 +0000 (0:00:00.652) 0:03:45.090 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 18 September 2025 10:39:18 +0000 (0:00:00.245) 0:03:45.335 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 18 September 2025 10:39:18 +0000 (0:00:00.614) 0:03:45.949 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 18 September 2025 10:39:19 +0000 (0:00:00.313) 0:03:46.262 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 18 September 2025 10:39:19 +0000 (0:00:00.349) 0:03:46.611 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 18 September 2025 10:39:19 +0000 (0:00:00.284) 0:03:46.896 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 18 September 2025 10:39:20 +0000 (0:00:00.563) 0:03:47.459 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-0]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 18 September 2025 10:39:20 +0000 (0:00:00.391) 0:03:47.851 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 18 September 2025 10:39:21 +0000 (0:00:00.388) 0:03:48.239 ****
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 18 September 2025 10:39:21 +0000 (0:00:00.382) 0:03:48.622 ****
10:46:43.175488 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.175493 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.175498 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175504 | orchestrator | 2025-09-18 10:46:43.175512 | orchestrator | TASK [ceph-mon : Set_fact container_exec_cmd] ********************************** 2025-09-18 10:46:43.175518 | orchestrator | Thursday 18 September 2025 10:39:22 +0000 (0:00:00.773) 0:03:49.395 **** 2025-09-18 10:46:43.175523 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.175529 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.175534 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175540 | orchestrator | 2025-09-18 10:46:43.175545 | orchestrator | TASK [ceph-mon : Include deploy_monitors.yml] ********************************** 2025-09-18 10:46:43.175550 | orchestrator | Thursday 18 September 2025 10:39:23 +0000 (0:00:00.822) 0:03:50.218 **** 2025-09-18 10:46:43.175556 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-1, testbed-node-2, testbed-node-0 2025-09-18 10:46:43.175561 | orchestrator | 2025-09-18 10:46:43.175567 | orchestrator | TASK [ceph-mon : Check if monitor initial keyring already exists] ************** 2025-09-18 10:46:43.175575 | orchestrator | Thursday 18 September 2025 10:39:23 +0000 (0:00:00.653) 0:03:50.871 **** 2025-09-18 10:46:43.175581 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.175586 | orchestrator | 2025-09-18 10:46:43.175595 | orchestrator | TASK [ceph-mon : Generate monitor initial keyring] ***************************** 2025-09-18 10:46:43.175601 | orchestrator | Thursday 18 September 2025 10:39:24 +0000 (0:00:00.418) 0:03:51.289 **** 2025-09-18 10:46:43.175606 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-09-18 10:46:43.175612 | orchestrator | 2025-09-18 10:46:43.175617 | orchestrator | TASK [ceph-mon : Set_fact _initial_mon_key_success] 
**************************** 2025-09-18 10:46:43.175622 | orchestrator | Thursday 18 September 2025 10:39:25 +0000 (0:00:00.833) 0:03:52.123 **** 2025-09-18 10:46:43.175628 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175633 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.175638 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.175656 | orchestrator | 2025-09-18 10:46:43.175662 | orchestrator | TASK [ceph-mon : Get initial keyring when it already exists] ******************* 2025-09-18 10:46:43.175667 | orchestrator | Thursday 18 September 2025 10:39:25 +0000 (0:00:00.344) 0:03:52.467 **** 2025-09-18 10:46:43.175673 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175678 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.175684 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.175689 | orchestrator | 2025-09-18 10:46:43.175695 | orchestrator | TASK [ceph-mon : Create monitor initial keyring] ******************************* 2025-09-18 10:46:43.175700 | orchestrator | Thursday 18 September 2025 10:39:25 +0000 (0:00:00.283) 0:03:52.751 **** 2025-09-18 10:46:43.175705 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.175711 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.175716 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.175722 | orchestrator | 2025-09-18 10:46:43.175727 | orchestrator | TASK [ceph-mon : Copy the initial key in /etc/ceph (for containers)] *********** 2025-09-18 10:46:43.175733 | orchestrator | Thursday 18 September 2025 10:39:26 +0000 (0:00:01.072) 0:03:53.823 **** 2025-09-18 10:46:43.175738 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.175743 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.175749 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.175754 | orchestrator | 2025-09-18 10:46:43.175760 | orchestrator | TASK [ceph-mon : Create monitor directory] ************************************* 2025-09-18 
10:46:43.175765 | orchestrator | Thursday 18 September 2025 10:39:27 +0000 (0:00:00.987) 0:03:54.811 **** 2025-09-18 10:46:43.175770 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.175776 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.175781 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.175787 | orchestrator | 2025-09-18 10:46:43.175792 | orchestrator | TASK [ceph-mon : Recursively fix ownership of monitor directory] *************** 2025-09-18 10:46:43.175797 | orchestrator | Thursday 18 September 2025 10:39:28 +0000 (0:00:00.699) 0:03:55.511 **** 2025-09-18 10:46:43.175803 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175808 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.175814 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.175819 | orchestrator | 2025-09-18 10:46:43.175824 | orchestrator | TASK [ceph-mon : Create admin keyring] ***************************************** 2025-09-18 10:46:43.175830 | orchestrator | Thursday 18 September 2025 10:39:29 +0000 (0:00:00.722) 0:03:56.233 **** 2025-09-18 10:46:43.175835 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.175841 | orchestrator | 2025-09-18 10:46:43.175846 | orchestrator | TASK [ceph-mon : Slurp admin keyring] ****************************************** 2025-09-18 10:46:43.175851 | orchestrator | Thursday 18 September 2025 10:39:30 +0000 (0:00:01.139) 0:03:57.372 **** 2025-09-18 10:46:43.175857 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.175862 | orchestrator | 2025-09-18 10:46:43.175868 | orchestrator | TASK [ceph-mon : Copy admin keyring over to mons] ****************************** 2025-09-18 10:46:43.175873 | orchestrator | Thursday 18 September 2025 10:39:30 +0000 (0:00:00.694) 0:03:58.067 **** 2025-09-18 10:46:43.175878 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.175888 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 
10:46:43.175893 | orchestrator | ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:46:43.175899 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:46:43.175904 | orchestrator | ok: [testbed-node-1] => (item=None) 2025-09-18 10:46:43.175910 | orchestrator | ok: [testbed-node-2 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:46:43.175915 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:46:43.175921 | orchestrator | changed: [testbed-node-0 -> {{ item }}] 2025-09-18 10:46:43.175926 | orchestrator | ok: [testbed-node-2] => (item=None) 2025-09-18 10:46:43.175931 | orchestrator | ok: [testbed-node-2 -> {{ item }}] 2025-09-18 10:46:43.175937 | orchestrator | ok: [testbed-node-1 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:46:43.175942 | orchestrator | ok: [testbed-node-1 -> {{ item }}] 2025-09-18 10:46:43.175948 | orchestrator | 2025-09-18 10:46:43.175953 | orchestrator | TASK [ceph-mon : Import admin keyring into mon keyring] ************************ 2025-09-18 10:46:43.175958 | orchestrator | Thursday 18 September 2025 10:39:34 +0000 (0:00:03.321) 0:04:01.389 **** 2025-09-18 10:46:43.175964 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.175972 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.175978 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.175983 | orchestrator | 2025-09-18 10:46:43.175989 | orchestrator | TASK [ceph-mon : Set_fact ceph-mon container command] ************************** 2025-09-18 10:46:43.175994 | orchestrator | Thursday 18 September 2025 10:39:35 +0000 (0:00:01.311) 0:04:02.701 **** 2025-09-18 10:46:43.176000 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176005 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.176011 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176016 | orchestrator | 2025-09-18 
10:46:43.176021 | orchestrator | TASK [ceph-mon : Set_fact monmaptool container command] ************************ 2025-09-18 10:46:43.176027 | orchestrator | Thursday 18 September 2025 10:39:35 +0000 (0:00:00.296) 0:04:02.998 **** 2025-09-18 10:46:43.176032 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176038 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.176043 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176049 | orchestrator | 2025-09-18 10:46:43.176054 | orchestrator | TASK [ceph-mon : Generate initial monmap] ************************************** 2025-09-18 10:46:43.176060 | orchestrator | Thursday 18 September 2025 10:39:36 +0000 (0:00:00.279) 0:04:03.277 **** 2025-09-18 10:46:43.176065 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176074 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176079 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176085 | orchestrator | 2025-09-18 10:46:43.176090 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs with keyring] ******************************* 2025-09-18 10:46:43.176096 | orchestrator | Thursday 18 September 2025 10:39:37 +0000 (0:00:01.587) 0:04:04.864 **** 2025-09-18 10:46:43.176101 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176106 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176112 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176117 | orchestrator | 2025-09-18 10:46:43.176123 | orchestrator | TASK [ceph-mon : Ceph monitor mkfs without keyring] **************************** 2025-09-18 10:46:43.176128 | orchestrator | Thursday 18 September 2025 10:39:39 +0000 (0:00:01.308) 0:04:06.173 **** 2025-09-18 10:46:43.176133 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176139 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176144 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176150 | orchestrator | 2025-09-18 10:46:43.176155 | 
orchestrator | TASK [ceph-mon : Include start_monitor.yml] ************************************ 2025-09-18 10:46:43.176161 | orchestrator | Thursday 18 September 2025 10:39:39 +0000 (0:00:00.266) 0:04:06.440 **** 2025-09-18 10:46:43.176166 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176175 | orchestrator | 2025-09-18 10:46:43.176181 | orchestrator | TASK [ceph-mon : Ensure systemd service override directory exists] ************* 2025-09-18 10:46:43.176186 | orchestrator | Thursday 18 September 2025 10:39:39 +0000 (0:00:00.486) 0:04:06.926 **** 2025-09-18 10:46:43.176191 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176197 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176202 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176207 | orchestrator | 2025-09-18 10:46:43.176213 | orchestrator | TASK [ceph-mon : Add ceph-mon systemd service overrides] *********************** 2025-09-18 10:46:43.176218 | orchestrator | Thursday 18 September 2025 10:39:40 +0000 (0:00:00.469) 0:04:07.395 **** 2025-09-18 10:46:43.176223 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176229 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176234 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176240 | orchestrator | 2025-09-18 10:46:43.176245 | orchestrator | TASK [ceph-mon : Include_tasks systemd.yml] ************************************ 2025-09-18 10:46:43.176251 | orchestrator | Thursday 18 September 2025 10:39:40 +0000 (0:00:00.306) 0:04:07.702 **** 2025-09-18 10:46:43.176256 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176261 | orchestrator | 2025-09-18 10:46:43.176267 | orchestrator | TASK [ceph-mon : Generate systemd unit file for mon container] ***************** 2025-09-18 10:46:43.176272 | 
orchestrator | Thursday 18 September 2025 10:39:41 +0000 (0:00:00.497) 0:04:08.200 **** 2025-09-18 10:46:43.176278 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176283 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176289 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176294 | orchestrator | 2025-09-18 10:46:43.176299 | orchestrator | TASK [ceph-mon : Generate systemd ceph-mon target file] ************************ 2025-09-18 10:46:43.176305 | orchestrator | Thursday 18 September 2025 10:39:43 +0000 (0:00:02.200) 0:04:10.400 **** 2025-09-18 10:46:43.176310 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176316 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176321 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176326 | orchestrator | 2025-09-18 10:46:43.176332 | orchestrator | TASK [ceph-mon : Enable ceph-mon.target] *************************************** 2025-09-18 10:46:43.176337 | orchestrator | Thursday 18 September 2025 10:39:44 +0000 (0:00:01.375) 0:04:11.776 **** 2025-09-18 10:46:43.176343 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176348 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176353 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176359 | orchestrator | 2025-09-18 10:46:43.176364 | orchestrator | TASK [ceph-mon : Start the monitor service] ************************************ 2025-09-18 10:46:43.176369 | orchestrator | Thursday 18 September 2025 10:39:46 +0000 (0:00:01.750) 0:04:13.526 **** 2025-09-18 10:46:43.176375 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.176380 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.176385 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.176391 | orchestrator | 2025-09-18 10:46:43.176396 | orchestrator | TASK [ceph-mon : Include_tasks ceph_keys.yml] ********************************** 2025-09-18 10:46:43.176402 | orchestrator | 
Thursday 18 September 2025 10:39:48 +0000 (0:00:02.098) 0:04:15.625 **** 2025-09-18 10:46:43.176407 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176413 | orchestrator | 2025-09-18 10:46:43.176418 | orchestrator | TASK [ceph-mon : Waiting for the monitor(s) to form the quorum...] ************* 2025-09-18 10:46:43.176423 | orchestrator | Thursday 18 September 2025 10:39:49 +0000 (0:00:00.707) 0:04:16.333 **** 2025-09-18 10:46:43.176432 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for the monitor(s) to form the quorum... (10 retries left). 2025-09-18 10:46:43.176437 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176443 | orchestrator | 2025-09-18 10:46:43.176448 | orchestrator | TASK [ceph-mon : Fetch ceph initial keys] ************************************** 2025-09-18 10:46:43.176459 | orchestrator | Thursday 18 September 2025 10:40:11 +0000 (0:00:21.810) 0:04:38.143 **** 2025-09-18 10:46:43.176465 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176470 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176476 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.176481 | orchestrator | 2025-09-18 10:46:43.176486 | orchestrator | TASK [ceph-mon : Include secure_cluster.yml] *********************************** 2025-09-18 10:46:43.176492 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:10.644) 0:04:48.787 **** 2025-09-18 10:46:43.176497 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176503 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176508 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176514 | orchestrator | 2025-09-18 10:46:43.176519 | orchestrator | TASK [ceph-mon : Set cluster configs] ****************************************** 2025-09-18 10:46:43.176527 | orchestrator | Thursday 18 September 2025 10:40:21 +0000 (0:00:00.289) 0:04:49.077 **** 2025-09-18 
10:46:43.176534 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'public_network', 'value': '192.168.16.0/20'}]) 2025-09-18 10:46:43.176541 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'cluster_network', 'value': '192.168.16.0/20'}]) 2025-09-18 10:46:43.176547 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'osd_pool_default_crush_rule', 'value': -1}]) 2025-09-18 10:46:43.176553 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'ms_bind_ipv6', 'value': 'False'}]) 2025-09-18 10:46:43.176559 | orchestrator | changed: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': 
'__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'ms_bind_ipv4', 'value': 'True'}]) 2025-09-18 10:46:43.176565 | orchestrator | skipping: [testbed-node-0] => (item=[{'key': 'global', 'value': {'public_network': '192.168.16.0/20', 'cluster_network': '192.168.16.0/20', 'osd_pool_default_crush_rule': -1, 'ms_bind_ipv6': 'False', 'ms_bind_ipv4': 'True', 'osd_crush_chooseleaf_type': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}}, {'key': 'osd_crush_chooseleaf_type', 'value': '__omit_place_holder__a251c6f414ec0bd31910f8e75aaf44b062acd728'}])  2025-09-18 10:46:43.176571 | orchestrator | 2025-09-18 10:46:43.176577 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] ********************** 2025-09-18 10:46:43.176582 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:14.548) 0:05:03.625 **** 2025-09-18 10:46:43.176588 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176593 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176598 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176604 | orchestrator | 2025-09-18 10:46:43.176609 | orchestrator | RUNNING HANDLER [ceph-handler : Mons handler] ********************************** 2025-09-18 10:46:43.176620 | orchestrator | Thursday 18 September 2025 10:40:36 +0000 (0:00:00.358) 0:05:03.983 **** 2025-09-18 10:46:43.176626 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176631 | orchestrator | 2025-09-18 10:46:43.176636 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called before restart] ******** 2025-09-18 10:46:43.176653 | orchestrator | Thursday 18 September 2025 10:40:37 +0000 (0:00:00.625) 0:05:04.609 **** 2025-09-18 10:46:43.176659 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176665 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176670 | orchestrator | ok: 
[testbed-node-1] 2025-09-18 10:46:43.176675 | orchestrator | 2025-09-18 10:46:43.176681 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mon restart script] *********************** 2025-09-18 10:46:43.176689 | orchestrator | Thursday 18 September 2025 10:40:37 +0000 (0:00:00.408) 0:05:05.018 **** 2025-09-18 10:46:43.176695 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176700 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176706 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176711 | orchestrator | 2025-09-18 10:46:43.176716 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mon daemon(s)] ******************** 2025-09-18 10:46:43.176722 | orchestrator | Thursday 18 September 2025 10:40:38 +0000 (0:00:00.363) 0:05:05.382 **** 2025-09-18 10:46:43.176727 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-09-18 10:46:43.176733 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-09-18 10:46:43.176738 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-09-18 10:46:43.176744 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176749 | orchestrator | 2025-09-18 10:46:43.176754 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mon_handler_called after restart] ********* 2025-09-18 10:46:43.176760 | orchestrator | Thursday 18 September 2025 10:40:38 +0000 (0:00:00.611) 0:05:05.993 **** 2025-09-18 10:46:43.176765 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176771 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.176779 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176785 | orchestrator | 2025-09-18 10:46:43.176790 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-09-18 10:46:43.176796 | orchestrator | 2025-09-18 10:46:43.176801 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 
10:46:43.176806 | orchestrator | Thursday 18 September 2025 10:40:39 +0000 (0:00:00.808) 0:05:06.802 **** 2025-09-18 10:46:43.176812 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176818 | orchestrator | 2025-09-18 10:46:43.176823 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 10:46:43.176828 | orchestrator | Thursday 18 September 2025 10:40:40 +0000 (0:00:00.507) 0:05:07.310 **** 2025-09-18 10:46:43.176834 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.176839 | orchestrator | 2025-09-18 10:46:43.176845 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 10:46:43.176850 | orchestrator | Thursday 18 September 2025 10:40:40 +0000 (0:00:00.514) 0:05:07.825 **** 2025-09-18 10:46:43.176856 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.176861 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.176866 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.176872 | orchestrator | 2025-09-18 10:46:43.176877 | orchestrator | TASK [ceph-handler : Check for an osd container] ******************************* 2025-09-18 10:46:43.176883 | orchestrator | Thursday 18 September 2025 10:40:41 +0000 (0:00:01.117) 0:05:08.942 **** 2025-09-18 10:46:43.176888 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.176893 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.176899 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.176904 | orchestrator | 2025-09-18 10:46:43.176910 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 10:46:43.176919 | orchestrator | Thursday 18 September 2025 10:40:42 +0000 (0:00:00.356) 0:05:09.298 **** 
2025-09-18 10:46:43.176924 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.176930 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.176935 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.176941 | orchestrator |
2025-09-18 10:46:43.176946 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-18 10:46:43.176951 | orchestrator | Thursday 18 September 2025 10:40:42 +0000 (0:00:00.323) 0:05:09.622 ****
2025-09-18 10:46:43.176957 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.176962 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.176968 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.176973 | orchestrator |
2025-09-18 10:46:43.176978 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-18 10:46:43.176984 | orchestrator | Thursday 18 September 2025 10:40:42 +0000 (0:00:00.345) 0:05:09.967 ****
2025-09-18 10:46:43.176989 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.176995 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177000 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177006 | orchestrator |
2025-09-18 10:46:43.177011 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-18 10:46:43.177016 | orchestrator | Thursday 18 September 2025 10:40:43 +0000 (0:00:01.087) 0:05:11.054 ****
2025-09-18 10:46:43.177022 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177027 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177032 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177038 | orchestrator |
2025-09-18 10:46:43.177043 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-18 10:46:43.177049 | orchestrator | Thursday 18 September 2025 10:40:44 +0000 (0:00:00.355) 0:05:11.409 ****
2025-09-18 10:46:43.177054 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177060 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177065 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177070 | orchestrator |
2025-09-18 10:46:43.177076 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-18 10:46:43.177081 | orchestrator | Thursday 18 September 2025 10:40:44 +0000 (0:00:00.318) 0:05:11.728 ****
2025-09-18 10:46:43.177086 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177092 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177097 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177103 | orchestrator |
2025-09-18 10:46:43.177108 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-18 10:46:43.177114 | orchestrator | Thursday 18 September 2025 10:40:45 +0000 (0:00:00.737) 0:05:12.465 ****
2025-09-18 10:46:43.177119 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177124 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177130 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177135 | orchestrator |
2025-09-18 10:46:43.177141 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-18 10:46:43.177146 | orchestrator | Thursday 18 September 2025 10:40:46 +0000 (0:00:01.163) 0:05:13.629 ****
2025-09-18 10:46:43.177154 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177160 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177165 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177171 | orchestrator |
2025-09-18 10:46:43.177176 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-18 10:46:43.177181 | orchestrator | Thursday 18 September 2025 10:40:46 +0000 (0:00:00.323) 0:05:13.953 ****
2025-09-18 10:46:43.177187 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177192 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177198 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177203 | orchestrator |
2025-09-18 10:46:43.177209 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-18 10:46:43.177214 | orchestrator | Thursday 18 September 2025 10:40:47 +0000 (0:00:00.358) 0:05:14.311 ****
2025-09-18 10:46:43.177223 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177228 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177234 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177239 | orchestrator |
2025-09-18 10:46:43.177245 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-18 10:46:43.177250 | orchestrator | Thursday 18 September 2025 10:40:47 +0000 (0:00:00.313) 0:05:14.624 ****
2025-09-18 10:46:43.177258 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177264 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177269 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177275 | orchestrator |
2025-09-18 10:46:43.177280 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-18 10:46:43.177285 | orchestrator | Thursday 18 September 2025 10:40:48 +0000 (0:00:00.611) 0:05:15.235 ****
2025-09-18 10:46:43.177291 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177296 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177302 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177307 | orchestrator |
2025-09-18 10:46:43.177312 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-18 10:46:43.177318 | orchestrator | Thursday 18 September 2025 10:40:48 +0000 (0:00:00.316) 0:05:15.552 ****
2025-09-18 10:46:43.177323 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177329 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177334 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177339 | orchestrator |
2025-09-18 10:46:43.177345 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-18 10:46:43.177350 | orchestrator | Thursday 18 September 2025 10:40:48 +0000 (0:00:00.323) 0:05:15.876 ****
2025-09-18 10:46:43.177355 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.177361 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.177366 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.177372 | orchestrator |
2025-09-18 10:46:43.177377 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-18 10:46:43.177382 | orchestrator | Thursday 18 September 2025 10:40:49 +0000 (0:00:00.327) 0:05:16.203 ****
2025-09-18 10:46:43.177388 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177393 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177399 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177404 | orchestrator |
2025-09-18 10:46:43.177409 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-18 10:46:43.177415 | orchestrator | Thursday 18 September 2025 10:40:49 +0000 (0:00:00.355) 0:05:16.558 ****
2025-09-18 10:46:43.177420 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177426 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.177431 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.177437 | orchestrator |
2025-09-18 10:46:43.177442 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-18 10:46:43.177448 | orchestrator | Thursday 18 September 2025 10:40:50 +0000 (0:00:00.814) 0:05:17.373 ****
2025-09-18 10:46:43.177453 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.177458 |
orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.177464 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.177469 | orchestrator | 2025-09-18 10:46:43.177475 | orchestrator | TASK [ceph-mgr : Set_fact container_exec_cmd] ********************************** 2025-09-18 10:46:43.177480 | orchestrator | Thursday 18 September 2025 10:40:50 +0000 (0:00:00.576) 0:05:17.949 **** 2025-09-18 10:46:43.177485 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 10:46:43.177491 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 10:46:43.177496 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 10:46:43.177502 | orchestrator | 2025-09-18 10:46:43.177507 | orchestrator | TASK [ceph-mgr : Include common.yml] ******************************************* 2025-09-18 10:46:43.177516 | orchestrator | Thursday 18 September 2025 10:40:51 +0000 (0:00:00.734) 0:05:18.684 **** 2025-09-18 10:46:43.177522 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.177527 | orchestrator | 2025-09-18 10:46:43.177533 | orchestrator | TASK [ceph-mgr : Create mgr directory] ***************************************** 2025-09-18 10:46:43.177538 | orchestrator | Thursday 18 September 2025 10:40:52 +0000 (0:00:00.653) 0:05:19.337 **** 2025-09-18 10:46:43.177543 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.177549 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.177554 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.177559 | orchestrator | 2025-09-18 10:46:43.177565 | orchestrator | TASK [ceph-mgr : Fetch ceph mgr keyring] *************************************** 2025-09-18 10:46:43.177570 | orchestrator | Thursday 18 September 2025 10:40:52 +0000 (0:00:00.714) 0:05:20.052 **** 2025-09-18 10:46:43.177576 | orchestrator | skipping: 
[testbed-node-0] 2025-09-18 10:46:43.177581 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.177587 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.177592 | orchestrator | 2025-09-18 10:46:43.177597 | orchestrator | TASK [ceph-mgr : Create ceph mgr keyring(s) on a mon node] ********************* 2025-09-18 10:46:43.177603 | orchestrator | Thursday 18 September 2025 10:40:53 +0000 (0:00:00.269) 0:05:20.321 **** 2025-09-18 10:46:43.177608 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.177614 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.177619 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.177627 | orchestrator | changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}] 2025-09-18 10:46:43.177633 | orchestrator | 2025-09-18 10:46:43.177638 | orchestrator | TASK [ceph-mgr : Set_fact _mgr_keys] ******************************************* 2025-09-18 10:46:43.177670 | orchestrator | Thursday 18 September 2025 10:41:03 +0000 (0:00:10.586) 0:05:30.909 **** 2025-09-18 10:46:43.177676 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.177681 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.177687 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.177692 | orchestrator | 2025-09-18 10:46:43.177698 | orchestrator | TASK [ceph-mgr : Get keys from monitors] *************************************** 2025-09-18 10:46:43.177703 | orchestrator | Thursday 18 September 2025 10:41:04 +0000 (0:00:00.630) 0:05:31.540 **** 2025-09-18 10:46:43.177709 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-18 10:46:43.177714 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 10:46:43.177720 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 10:46:43.177725 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.177730 | orchestrator | ok: [testbed-node-1 -> 
testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:46:43.177740 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:46:43.177745 | orchestrator | 2025-09-18 10:46:43.177751 | orchestrator | TASK [ceph-mgr : Copy ceph key(s) if needed] *********************************** 2025-09-18 10:46:43.177756 | orchestrator | Thursday 18 September 2025 10:41:06 +0000 (0:00:02.329) 0:05:33.869 **** 2025-09-18 10:46:43.177762 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-09-18 10:46:43.177767 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-09-18 10:46:43.177773 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-09-18 10:46:43.177778 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-09-18 10:46:43.177784 | orchestrator | changed: [testbed-node-1] => (item=None) 2025-09-18 10:46:43.177789 | orchestrator | changed: [testbed-node-2] => (item=None) 2025-09-18 10:46:43.177795 | orchestrator | 2025-09-18 10:46:43.177799 | orchestrator | TASK [ceph-mgr : Set mgr key permissions] ************************************** 2025-09-18 10:46:43.177804 | orchestrator | Thursday 18 September 2025 10:41:08 +0000 (0:00:01.249) 0:05:35.119 **** 2025-09-18 10:46:43.177809 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.177814 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.177822 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.177827 | orchestrator | 2025-09-18 10:46:43.177832 | orchestrator | TASK [ceph-mgr : Append dashboard modules to ceph_mgr_modules] ***************** 2025-09-18 10:46:43.177836 | orchestrator | Thursday 18 September 2025 10:41:08 +0000 (0:00:00.695) 0:05:35.815 **** 2025-09-18 10:46:43.177841 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.177846 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.177851 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.177856 | orchestrator | 2025-09-18 
10:46:43.177860 | orchestrator | TASK [ceph-mgr : Include pre_requisite.yml] ************************************ 2025-09-18 10:46:43.177865 | orchestrator | Thursday 18 September 2025 10:41:09 +0000 (0:00:00.325) 0:05:36.141 **** 2025-09-18 10:46:43.177870 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.177875 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.177880 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.177884 | orchestrator | 2025-09-18 10:46:43.177889 | orchestrator | TASK [ceph-mgr : Include start_mgr.yml] **************************************** 2025-09-18 10:46:43.177894 | orchestrator | Thursday 18 September 2025 10:41:09 +0000 (0:00:00.603) 0:05:36.744 **** 2025-09-18 10:46:43.177899 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.177904 | orchestrator | 2025-09-18 10:46:43.177908 | orchestrator | TASK [ceph-mgr : Ensure systemd service override directory exists] ************* 2025-09-18 10:46:43.177913 | orchestrator | Thursday 18 September 2025 10:41:10 +0000 (0:00:00.567) 0:05:37.312 **** 2025-09-18 10:46:43.177918 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.177923 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.177928 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.177933 | orchestrator | 2025-09-18 10:46:43.177937 | orchestrator | TASK [ceph-mgr : Add ceph-mgr systemd service overrides] *********************** 2025-09-18 10:46:43.177943 | orchestrator | Thursday 18 September 2025 10:41:10 +0000 (0:00:00.322) 0:05:37.634 **** 2025-09-18 10:46:43.177947 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.177952 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.177957 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:46:43.177962 | orchestrator | 2025-09-18 10:46:43.177966 | orchestrator | TASK [ceph-mgr : Include_tasks 
systemd.yml] ************************************ 2025-09-18 10:46:43.177971 | orchestrator | Thursday 18 September 2025 10:41:11 +0000 (0:00:00.687) 0:05:38.321 **** 2025-09-18 10:46:43.177976 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.177981 | orchestrator | 2025-09-18 10:46:43.177986 | orchestrator | TASK [ceph-mgr : Generate systemd unit file] *********************************** 2025-09-18 10:46:43.177991 | orchestrator | Thursday 18 September 2025 10:41:11 +0000 (0:00:00.541) 0:05:38.863 **** 2025-09-18 10:46:43.177995 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.178000 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.178005 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.178010 | orchestrator | 2025-09-18 10:46:43.178070 | orchestrator | TASK [ceph-mgr : Generate systemd ceph-mgr target file] ************************ 2025-09-18 10:46:43.178078 | orchestrator | Thursday 18 September 2025 10:41:13 +0000 (0:00:01.262) 0:05:40.126 **** 2025-09-18 10:46:43.178083 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.178087 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.178092 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.178097 | orchestrator | 2025-09-18 10:46:43.178102 | orchestrator | TASK [ceph-mgr : Enable ceph-mgr.target] *************************************** 2025-09-18 10:46:43.178107 | orchestrator | Thursday 18 September 2025 10:41:14 +0000 (0:00:01.271) 0:05:41.398 **** 2025-09-18 10:46:43.178111 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.178116 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.178121 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.178126 | orchestrator | 2025-09-18 10:46:43.178134 | orchestrator | TASK [ceph-mgr : Systemd start mgr] ******************************************** 2025-09-18 10:46:43.178143 | 
orchestrator | Thursday 18 September 2025 10:41:16 +0000 (0:00:01.808) 0:05:43.206 **** 2025-09-18 10:46:43.178148 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.178153 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.178158 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.178162 | orchestrator | 2025-09-18 10:46:43.178167 | orchestrator | TASK [ceph-mgr : Include mgr_modules.yml] ************************************** 2025-09-18 10:46:43.178172 | orchestrator | Thursday 18 September 2025 10:41:18 +0000 (0:00:02.065) 0:05:45.271 **** 2025-09-18 10:46:43.178177 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:46:43.178182 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:46:43.178186 | orchestrator | included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2 2025-09-18 10:46:43.178191 | orchestrator | 2025-09-18 10:46:43.178196 | orchestrator | TASK [ceph-mgr : Wait for all mgr to be up] ************************************ 2025-09-18 10:46:43.178201 | orchestrator | Thursday 18 September 2025 10:41:18 +0000 (0:00:00.408) 0:05:45.680 **** 2025-09-18 10:46:43.178219 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (30 retries left). 2025-09-18 10:46:43.178225 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (29 retries left). 2025-09-18 10:46:43.178230 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (28 retries left). 2025-09-18 10:46:43.178235 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (27 retries left). 2025-09-18 10:46:43.178240 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (26 retries left). 2025-09-18 10:46:43.178245 | orchestrator | FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: Wait for all mgr to be up (25 retries left). 
2025-09-18 10:46:43.178249 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.178254 | orchestrator |
2025-09-18 10:46:43.178259 | orchestrator | TASK [ceph-mgr : Get enabled modules from ceph-mgr] ****************************
2025-09-18 10:46:43.178264 | orchestrator | Thursday 18 September 2025 10:41:55 +0000 (0:00:36.650) 0:06:22.330 ****
2025-09-18 10:46:43.178269 | orchestrator | ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.178274 | orchestrator |
2025-09-18 10:46:43.178279 | orchestrator | TASK [ceph-mgr : Set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
2025-09-18 10:46:43.178283 | orchestrator | Thursday 18 September 2025 10:41:56 +0000 (0:00:01.273) 0:06:23.604 ****
2025-09-18 10:46:43.178288 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.178293 | orchestrator |
2025-09-18 10:46:43.178298 | orchestrator | TASK [ceph-mgr : Set _disabled_ceph_mgr_modules fact] **************************
2025-09-18 10:46:43.178303 | orchestrator | Thursday 18 September 2025 10:41:56 +0000 (0:00:00.269) 0:06:23.873 ****
2025-09-18 10:46:43.178307 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.178312 | orchestrator |
2025-09-18 10:46:43.178317 | orchestrator | TASK [ceph-mgr : Disable ceph mgr enabled modules] *****************************
2025-09-18 10:46:43.178322 | orchestrator | Thursday 18 September 2025 10:41:56 +0000 (0:00:00.126) 0:06:24.000 ****
2025-09-18 10:46:43.178327 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
2025-09-18 10:46:43.178331 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
2025-09-18 10:46:43.178336 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)
2025-09-18 10:46:43.178341 | orchestrator |
2025-09-18 10:46:43.178346 | orchestrator | TASK [ceph-mgr : Add modules to ceph-mgr] **************************************
2025-09-18 10:46:43.178350 | orchestrator | Thursday 18 September 2025 10:42:03 +0000 (0:00:06.459) 0:06:30.460 ****
2025-09-18 10:46:43.178355 | orchestrator | skipping: [testbed-node-2] => (item=balancer)
2025-09-18 10:46:43.178360 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
2025-09-18 10:46:43.178365 | orchestrator | changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
2025-09-18 10:46:43.178374 | orchestrator | skipping: [testbed-node-2] => (item=status)
2025-09-18 10:46:43.178379 | orchestrator |
2025-09-18 10:46:43.178383 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-18 10:46:43.178388 | orchestrator | Thursday 18 September 2025 10:42:08 +0000 (0:00:04.857) 0:06:35.318 ****
2025-09-18 10:46:43.178393 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:46:43.178398 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:46:43.178403 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:46:43.178408 | orchestrator |
2025-09-18 10:46:43.178412 | orchestrator | RUNNING HANDLER [ceph-handler : Mgrs handler] **********************************
2025-09-18 10:46:43.178417 | orchestrator | Thursday 18 September 2025 10:42:09 +0000 (0:00:01.036) 0:06:36.354 ****
2025-09-18 10:46:43.178422 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.178427 | orchestrator |
2025-09-18 10:46:43.178432 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called before restart] ********
2025-09-18 10:46:43.178437 | orchestrator | Thursday 18 September 2025 10:42:09 +0000 (0:00:00.556) 0:06:36.911 ****
2025-09-18 10:46:43.178441 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.178446 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.178451 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.178456 | orchestrator |
2025-09-18 10:46:43.178461 | orchestrator | RUNNING HANDLER [ceph-handler : Copy mgr restart script] ***********************
2025-09-18 10:46:43.178466 | orchestrator | Thursday 18 September 2025 10:42:10 +0000 (0:00:00.344) 0:06:37.256 ****
2025-09-18 10:46:43.178470 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:46:43.178475 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:46:43.178480 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:46:43.178485 | orchestrator |
2025-09-18 10:46:43.178492 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph mgr daemon(s)] ********************
2025-09-18 10:46:43.178497 | orchestrator | Thursday 18 September 2025 10:42:11 +0000 (0:00:01.639) 0:06:38.895 ****
2025-09-18 10:46:43.178502 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-09-18 10:46:43.178507 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-09-18 10:46:43.178512 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-09-18 10:46:43.178517 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.178522 | orchestrator |
2025-09-18 10:46:43.178526 | orchestrator | RUNNING HANDLER [ceph-handler : Set _mgr_handler_called after restart] *********
2025-09-18 10:46:43.178531 | orchestrator | Thursday 18 September 2025 10:42:12 +0000 (0:00:00.629) 0:06:39.524 ****
2025-09-18 10:46:43.178536 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.178541 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.178546 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.178550 | orchestrator |
2025-09-18 10:46:43.178555 | orchestrator | PLAY [Apply role ceph-osd] *****************************************************
2025-09-18 10:46:43.178560 | orchestrator |
2025-09-18 10:46:43.178565 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-18 10:46:43.178582 | orchestrator | Thursday 18 September 2025 10:42:13 +0000 (0:00:00.625) 0:06:40.150 ****
2025-09-18 10:46:43.178587 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.178592 | orchestrator |
2025-09-18 10:46:43.178597 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-18 10:46:43.178602 | orchestrator | Thursday 18 September 2025 10:42:13 +0000 (0:00:00.794) 0:06:40.944 ****
2025-09-18 10:46:43.178607 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.178612 | orchestrator |
2025-09-18 10:46:43.178616 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-18 10:46:43.178621 | orchestrator | Thursday 18 September 2025 10:42:14 +0000 (0:00:00.555) 0:06:41.500 ****
2025-09-18 10:46:43.178630 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.178635 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.178640 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.178661 | orchestrator |
2025-09-18 10:46:43.178666 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-18 10:46:43.178671 | orchestrator | Thursday 18 September 2025 10:42:14 +0000 (0:00:00.305) 0:06:41.805 ****
2025-09-18 10:46:43.178675 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.178680 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.178685 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.178690 | orchestrator |
2025-09-18 10:46:43.178695 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-18 10:46:43.178700 | orchestrator | Thursday 18 September 2025 10:42:15 +0000 (0:00:01.000) 0:06:42.806 ****
2025-09-18 10:46:43.178704 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.178709 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.178714 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.178719 | orchestrator | 2025-09-18 10:46:43.178724 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 10:46:43.178729 | orchestrator | Thursday 18 September 2025 10:42:16 +0000 (0:00:00.724) 0:06:43.530 **** 2025-09-18 10:46:43.178733 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.178738 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.178743 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.178748 | orchestrator | 2025-09-18 10:46:43.178753 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 10:46:43.178757 | orchestrator | Thursday 18 September 2025 10:42:17 +0000 (0:00:00.698) 0:06:44.229 **** 2025-09-18 10:46:43.178762 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.178767 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.178772 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.178777 | orchestrator | 2025-09-18 10:46:43.178781 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 10:46:43.178786 | orchestrator | Thursday 18 September 2025 10:42:17 +0000 (0:00:00.249) 0:06:44.478 **** 2025-09-18 10:46:43.178791 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.178796 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.178801 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.178806 | orchestrator | 2025-09-18 10:46:43.178810 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 10:46:43.178815 | orchestrator | Thursday 18 September 2025 10:42:17 +0000 (0:00:00.422) 0:06:44.901 **** 2025-09-18 10:46:43.178820 | 
orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.178825 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.178830 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.178835 | orchestrator | 2025-09-18 10:46:43.178839 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 10:46:43.178844 | orchestrator | Thursday 18 September 2025 10:42:18 +0000 (0:00:00.281) 0:06:45.182 **** 2025-09-18 10:46:43.178849 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.178854 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.178859 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.178863 | orchestrator | 2025-09-18 10:46:43.178868 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 10:46:43.178873 | orchestrator | Thursday 18 September 2025 10:42:18 +0000 (0:00:00.648) 0:06:45.830 **** 2025-09-18 10:46:43.178878 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.178883 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.178887 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.178892 | orchestrator | 2025-09-18 10:46:43.178897 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 10:46:43.178902 | orchestrator | Thursday 18 September 2025 10:42:19 +0000 (0:00:00.684) 0:06:46.515 **** 2025-09-18 10:46:43.178907 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.178916 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.178921 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.178926 | orchestrator | 2025-09-18 10:46:43.178931 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 10:46:43.178936 | orchestrator | Thursday 18 September 2025 10:42:19 +0000 (0:00:00.419) 0:06:46.934 **** 2025-09-18 10:46:43.178940 | orchestrator | skipping: 
[testbed-node-3] 2025-09-18 10:46:43.178945 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.178950 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.178955 | orchestrator | 2025-09-18 10:46:43.178960 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 10:46:43.178965 | orchestrator | Thursday 18 September 2025 10:42:20 +0000 (0:00:00.260) 0:06:47.194 **** 2025-09-18 10:46:43.178969 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.178974 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.178979 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.178984 | orchestrator | 2025-09-18 10:46:43.178988 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 10:46:43.178993 | orchestrator | Thursday 18 September 2025 10:42:20 +0000 (0:00:00.268) 0:06:47.462 **** 2025-09-18 10:46:43.178998 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.179003 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.179008 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.179013 | orchestrator | 2025-09-18 10:46:43.179017 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 10:46:43.179025 | orchestrator | Thursday 18 September 2025 10:42:20 +0000 (0:00:00.284) 0:06:47.747 **** 2025-09-18 10:46:43.179030 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.179035 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.179040 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.179044 | orchestrator | 2025-09-18 10:46:43.179049 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 10:46:43.179054 | orchestrator | Thursday 18 September 2025 10:42:21 +0000 (0:00:00.435) 0:06:48.183 **** 2025-09-18 10:46:43.179059 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.179064 | 
orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.179069 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.179073 | orchestrator | 2025-09-18 10:46:43.179078 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 10:46:43.179083 | orchestrator | Thursday 18 September 2025 10:42:21 +0000 (0:00:00.267) 0:06:48.450 **** 2025-09-18 10:46:43.179088 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.179093 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.179098 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.179103 | orchestrator | 2025-09-18 10:46:43.179107 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 10:46:43.179112 | orchestrator | Thursday 18 September 2025 10:42:21 +0000 (0:00:00.279) 0:06:48.730 **** 2025-09-18 10:46:43.179117 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.179122 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.179127 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.179132 | orchestrator | 2025-09-18 10:46:43.179137 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 10:46:43.179141 | orchestrator | Thursday 18 September 2025 10:42:21 +0000 (0:00:00.286) 0:06:49.017 **** 2025-09-18 10:46:43.179146 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.179151 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.179156 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.179161 | orchestrator | 2025-09-18 10:46:43.179165 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 10:46:43.179170 | orchestrator | Thursday 18 September 2025 10:42:22 +0000 (0:00:00.484) 0:06:49.502 **** 2025-09-18 10:46:43.179175 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.179180 | orchestrator | ok: 
[testbed-node-4]
2025-09-18 10:46:43.179185 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179194 | orchestrator |
2025-09-18 10:46:43.179199 | orchestrator | TASK [ceph-osd : Set_fact add_osd] *********************************************
2025-09-18 10:46:43.179204 | orchestrator | Thursday 18 September 2025 10:42:22 +0000 (0:00:00.491) 0:06:49.993 ****
2025-09-18 10:46:43.179208 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.179213 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.179218 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179223 | orchestrator |
2025-09-18 10:46:43.179228 | orchestrator | TASK [ceph-osd : Set_fact container_exec_cmd] **********************************
2025-09-18 10:46:43.179272 | orchestrator | Thursday 18 September 2025 10:42:23 +0000 (0:00:00.278) 0:06:50.271 ****
2025-09-18 10:46:43.179283 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-18 10:46:43.179288 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-18 10:46:43.179293 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-18 10:46:43.179298 | orchestrator |
2025-09-18 10:46:43.179303 | orchestrator | TASK [ceph-osd : Include_tasks system_tuning.yml] ******************************
2025-09-18 10:46:43.179308 | orchestrator | Thursday 18 September 2025 10:42:23 +0000 (0:00:00.765) 0:06:51.037 ****
2025-09-18 10:46:43.179312 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.179317 | orchestrator |
2025-09-18 10:46:43.179322 | orchestrator | TASK [ceph-osd : Create tmpfiles.d directory] **********************************
2025-09-18 10:46:43.179327 | orchestrator | Thursday 18 September 2025 10:42:24 +0000 (0:00:00.753) 0:06:51.790 ****
2025-09-18 10:46:43.179332 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.179336 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.179341 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.179346 | orchestrator |
2025-09-18 10:46:43.179351 | orchestrator | TASK [ceph-osd : Disable transparent hugepage] *********************************
2025-09-18 10:46:43.179356 | orchestrator | Thursday 18 September 2025 10:42:25 +0000 (0:00:00.340) 0:06:52.131 ****
2025-09-18 10:46:43.179360 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.179365 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.179370 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.179374 | orchestrator |
2025-09-18 10:46:43.179379 | orchestrator | TASK [ceph-osd : Get default vm.min_free_kbytes] *******************************
2025-09-18 10:46:43.179384 | orchestrator | Thursday 18 September 2025 10:42:25 +0000 (0:00:00.371) 0:06:52.502 ****
2025-09-18 10:46:43.179389 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.179394 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.179398 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179403 | orchestrator |
2025-09-18 10:46:43.179411 | orchestrator | TASK [ceph-osd : Set_fact vm_min_free_kbytes] **********************************
2025-09-18 10:46:43.179416 | orchestrator | Thursday 18 September 2025 10:42:26 +0000 (0:00:00.999) 0:06:53.501 ****
2025-09-18 10:46:43.179421 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.179425 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.179430 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179435 | orchestrator |
2025-09-18 10:46:43.179440 | orchestrator | TASK [ceph-osd : Apply operating system tuning] ********************************
2025-09-18 10:46:43.179444 | orchestrator | Thursday 18 September 2025 10:42:26 +0000 (0:00:00.375) 0:06:53.877 ****
2025-09-18 10:46:43.179449 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-18 10:46:43.179454 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-18 10:46:43.179459 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-09-18 10:46:43.179468 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-18 10:46:43.179473 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-18 10:46:43.179485 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-09-18 10:46:43.179489 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-18 10:46:43.179494 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-18 10:46:43.179499 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-09-18 10:46:43.179504 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-18 10:46:43.179509 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-18 10:46:43.179513 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-09-18 10:46:43.179518 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-18 10:46:43.179523 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-18 10:46:43.179527 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-09-18 10:46:43.179532 | orchestrator |
2025-09-18 10:46:43.179537 | orchestrator | TASK [ceph-osd : Install dependencies] *****************************************
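For orientation: the "Apply operating system tuning" task above loops over sysctl items and applies them on each OSD node. A minimal sketch of rendering those same items into a sysctl.d-style fragment, assuming the item list exactly as logged; `render_sysctl_conf` is a hypothetical helper, not part of ceph-ansible:

```python
# The item list mirrors the (item=...) values from the task output above.
os_tuning_params = [
    {"name": "fs.aio-max-nr", "value": "1048576", "enable": True},
    {"name": "fs.file-max", "value": 26234859},
    {"name": "vm.zone_reclaim_mode", "value": 0},
    {"name": "vm.swappiness", "value": 10},
    {"name": "vm.min_free_kbytes", "value": "67584"},
]

def render_sysctl_conf(params):
    """Render a sysctl.d file body; items with enable=False are skipped."""
    lines = []
    for p in params:
        if p.get("enable", True):
            lines.append(f"{p['name']} = {p['value']}")
    return "\n".join(lines) + "\n"

print(render_sysctl_conf(os_tuning_params))
```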
2025-09-18 10:46:43.179542 | orchestrator | Thursday 18 September 2025 10:42:29 +0000 (0:00:02.943) 0:06:56.820 ****
2025-09-18 10:46:43.179546 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.179551 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.179556 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.179561 | orchestrator |
2025-09-18 10:46:43.179566 | orchestrator | TASK [ceph-osd : Include_tasks common.yml] *************************************
2025-09-18 10:46:43.179570 | orchestrator | Thursday 18 September 2025 10:42:30 +0000 (0:00:00.308) 0:06:57.129 ****
2025-09-18 10:46:43.179575 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.179580 | orchestrator |
2025-09-18 10:46:43.179585 | orchestrator | TASK [ceph-osd : Create bootstrap-osd and osd directories] *********************
2025-09-18 10:46:43.179589 | orchestrator | Thursday 18 September 2025 10:42:30 +0000 (0:00:00.881) 0:06:58.010 ****
2025-09-18 10:46:43.179594 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-18 10:46:43.179599 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-18 10:46:43.179604 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-09-18 10:46:43.179609 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-09-18 10:46:43.179613 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-09-18 10:46:43.179618 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-09-18 10:46:43.179623 | orchestrator |
2025-09-18 10:46:43.179628 | orchestrator | TASK [ceph-osd : Get keys from monitors] ***************************************
2025-09-18 10:46:43.179632 | orchestrator | Thursday 18 September 2025 10:42:31 +0000 (0:00:01.030) 0:06:59.041 ****
2025-09-18 10:46:43.179637 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-18 10:46:43.179650 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-18 10:46:43.179655 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-09-18 10:46:43.179660 | orchestrator |
2025-09-18 10:46:43.179665 | orchestrator | TASK [ceph-osd : Copy ceph key(s) if needed] ***********************************
2025-09-18 10:46:43.179670 | orchestrator | Thursday 18 September 2025 10:42:34 +0000 (0:00:02.163) 0:07:01.205 ****
2025-09-18 10:46:43.179675 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-18 10:46:43.179679 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-09-18 10:46:43.179684 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.179689 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-18 10:46:43.179694 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-09-18 10:46:43.179699 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.179707 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-18 10:46:43.179712 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-09-18 10:46:43.179717 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.179722 | orchestrator |
2025-09-18 10:46:43.179727 | orchestrator | TASK [ceph-osd : Set noup flag] ************************************************
2025-09-18 10:46:43.179732 | orchestrator | Thursday 18 September 2025 10:42:35 +0000 (0:00:01.455) 0:07:02.660 ****
2025-09-18 10:46:43.179739 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.179744 | orchestrator |
2025-09-18 10:46:43.179749 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm.yml] ******************************
2025-09-18 10:46:43.179754 | orchestrator | Thursday 18 September 2025 10:42:37 +0000 (0:00:02.241) 0:07:04.902 ****
2025-09-18 10:46:43.179759 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.179763 | orchestrator |
2025-09-18 10:46:43.179768 | orchestrator | TASK [ceph-osd : Use ceph-volume to create osds] *******************************
2025-09-18 10:46:43.179773 | orchestrator | Thursday 18 September 2025 10:42:38 +0000 (0:00:00.615) 0:07:05.517 ****
2025-09-18 10:46:43.179778 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-727b3796-a5b5-597b-af2a-93b7c6d70a12', 'data_vg': 'ceph-727b3796-a5b5-597b-af2a-93b7c6d70a12'})
2025-09-18 10:46:43.179784 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-47a403a8-a225-5ee6-9198-c4852ee3470e', 'data_vg': 'ceph-47a403a8-a225-5ee6-9198-c4852ee3470e'})
2025-09-18 10:46:43.179792 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7', 'data_vg': 'ceph-f9a1ff5a-5f5e-51c3-b436-b4c70a0fd2b7'})
2025-09-18 10:46:43.179797 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f', 'data_vg': 'ceph-9692bdf8-7fc8-59c1-a3ba-06351cf9fe0f'})
2025-09-18 10:46:43.179802 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a661e8c0-0419-5fc2-afc1-c6737c299168', 'data_vg': 'ceph-a661e8c0-0419-5fc2-afc1-c6737c299168'})
2025-09-18 10:46:43.179807 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-7a586834-03f6-5ee9-b58c-2d4644436c0e', 'data_vg': 'ceph-7a586834-03f6-5ee9-b58c-2d4644436c0e'})
2025-09-18 10:46:43.179812 | orchestrator |
2025-09-18 10:46:43.179817 | orchestrator | TASK [ceph-osd : Include_tasks scenarios/lvm-batch.yml] ************************
2025-09-18 10:46:43.179821 | orchestrator | Thursday 18 September 2025 10:43:21 +0000 (0:00:43.406) 0:07:48.924 ****
2025-09-18 10:46:43.179826 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.179831 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.179836 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.179841 | orchestrator |
2025-09-18 10:46:43.179845 | orchestrator | TASK [ceph-osd : Include_tasks start_osds.yml] *********************************
2025-09-18 10:46:43.179850 | orchestrator | Thursday 18 September 2025 10:43:22 +0000 (0:00:00.462) 0:07:49.387 ****
2025-09-18 10:46:43.179855 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.179860 | orchestrator |
2025-09-18 10:46:43.179865 | orchestrator | TASK [ceph-osd : Get osd ids] **************************************************
2025-09-18 10:46:43.179869 | orchestrator | Thursday 18 September 2025 10:43:22 +0000 (0:00:00.475) 0:07:49.862 ****
2025-09-18 10:46:43.179874 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.179879 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.179884 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179889 | orchestrator |
2025-09-18 10:46:43.179893 | orchestrator | TASK [ceph-osd : Collect osd ids] **********************************************
2025-09-18 10:46:43.179898 | orchestrator | Thursday 18 September 2025 10:43:23 +0000 (0:00:00.593) 0:07:50.456 ****
2025-09-18 10:46:43.179903 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.179908 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.179913 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.179921 | orchestrator |
2025-09-18 10:46:43.179926 | orchestrator | TASK [ceph-osd : Include_tasks systemd.yml] ************************************
2025-09-18 10:46:43.179931 | orchestrator | Thursday 18 September 2025 10:43:26 +0000 (0:00:02.865) 0:07:53.322 ****
2025-09-18 10:46:43.179936 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-5, testbed-node-4
2025-09-18 10:46:43.179941 | orchestrator |
2025-09-18 10:46:43.179945 | orchestrator | TASK [ceph-osd : Generate systemd unit file] ***********************************
2025-09-18 10:46:43.179950 | orchestrator | Thursday 18 September 2025 10:43:26 +0000 (0:00:00.498) 0:07:53.821 ****
2025-09-18 10:46:43.179955 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.179960 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.179965 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.179969 | orchestrator |
2025-09-18 10:46:43.179974 | orchestrator | TASK [ceph-osd : Generate systemd ceph-osd target file] ************************
2025-09-18 10:46:43.179979 | orchestrator | Thursday 18 September 2025 10:43:27 +0000 (0:00:01.174) 0:07:54.996 ****
2025-09-18 10:46:43.179984 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.179988 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.179993 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.179998 | orchestrator |
2025-09-18 10:46:43.180003 | orchestrator | TASK [ceph-osd : Enable ceph-osd.target] ***************************************
2025-09-18 10:46:43.180008 | orchestrator | Thursday 18 September 2025 10:43:29 +0000 (0:00:01.471) 0:07:56.467 ****
2025-09-18 10:46:43.180012 | orchestrator | changed: [testbed-node-3]
2025-09-18 10:46:43.180017 | orchestrator | changed: [testbed-node-4]
2025-09-18 10:46:43.180022 | orchestrator | changed: [testbed-node-5]
2025-09-18 10:46:43.180027 | orchestrator |
2025-09-18 10:46:43.180031 | orchestrator | TASK [ceph-osd : Ensure systemd service override directory exists] *************
2025-09-18 10:46:43.180036 | orchestrator | Thursday 18 September 2025 10:43:31 +0000 (0:00:01.811) 0:07:58.279 ****
2025-09-18 10:46:43.180041 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180046 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180051 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180055 | orchestrator |
2025-09-18 10:46:43.180060 | orchestrator | TASK [ceph-osd : Add ceph-osd systemd service overrides] ***********************
2025-09-18 10:46:43.180065 | orchestrator | Thursday 18 September 2025 10:43:31 +0000 (0:00:00.326) 0:07:58.606 ****
2025-09-18 10:46:43.180070 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180077 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180082 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180087 | orchestrator |
2025-09-18 10:46:43.180091 | orchestrator | TASK [ceph-osd : Ensure /var/lib/ceph/osd/- is present] *********
2025-09-18 10:46:43.180096 | orchestrator | Thursday 18 September 2025 10:43:31 +0000 (0:00:00.337) 0:07:58.943 ****
2025-09-18 10:46:43.180101 | orchestrator | ok: [testbed-node-3] => (item=0)
2025-09-18 10:46:43.180106 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-09-18 10:46:43.180111 | orchestrator | ok: [testbed-node-5] => (item=4)
2025-09-18 10:46:43.180115 | orchestrator | ok: [testbed-node-3] => (item=3)
2025-09-18 10:46:43.180120 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-09-18 10:46:43.180125 | orchestrator | ok: [testbed-node-5] => (item=2)
2025-09-18 10:46:43.180130 | orchestrator |
2025-09-18 10:46:43.180134 | orchestrator | TASK [ceph-osd : Write run file in /var/lib/ceph/osd/xxxx/run] *****************
2025-09-18 10:46:43.180139 | orchestrator | Thursday 18 September 2025 10:43:33 +0000 (0:00:01.345) 0:08:00.289 ****
2025-09-18 10:46:43.180144 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-18 10:46:43.180149 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-18 10:46:43.180157 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-18 10:46:43.180161 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-18 10:46:43.180166 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-18 10:46:43.180171 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-18 10:46:43.180176 | orchestrator |
2025-09-18 10:46:43.180184 | orchestrator | TASK [ceph-osd : Systemd start osd] ********************************************
2025-09-18 10:46:43.180189 | orchestrator | Thursday 18 September 2025 10:43:35 +0000 (0:00:02.263) 0:08:02.553 ****
2025-09-18 10:46:43.180194 | orchestrator | changed: [testbed-node-3] => (item=0)
2025-09-18 10:46:43.180199 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-09-18 10:46:43.180204 | orchestrator | changed: [testbed-node-5] => (item=4)
2025-09-18 10:46:43.180208 | orchestrator | changed: [testbed-node-3] => (item=3)
2025-09-18 10:46:43.180213 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-09-18 10:46:43.180218 | orchestrator | changed: [testbed-node-5] => (item=2)
2025-09-18 10:46:43.180223 | orchestrator |
2025-09-18 10:46:43.180227 | orchestrator | TASK [ceph-osd : Unset noup flag] **********************************************
2025-09-18 10:46:43.180232 | orchestrator | Thursday 18 September 2025 10:43:38 +0000 (0:00:03.507) 0:08:06.060 ****
2025-09-18 10:46:43.180237 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180242 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180247 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.180251 | orchestrator |
2025-09-18 10:46:43.180256 | orchestrator | TASK [ceph-osd : Wait for all osd to be up] ************************************
2025-09-18 10:46:43.180261 | orchestrator | Thursday 18 September 2025 10:43:41 +0000 (0:00:02.768) 0:08:08.829 ****
2025-09-18 10:46:43.180266 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180271 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180275 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: Wait for all osd to be up (60 retries left).
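The "Wait for all osd to be up" task above is a retry loop (60 retries in this run) that repeats a cluster status query until every OSD reports up. A minimal sketch of the success condition under the assumption that it checks `ceph osd stat`-style output; `all_osds_up` is a hypothetical helper, not the ceph-ansible implementation:

```python
import re

def all_osds_up(osd_stat_line: str) -> bool:
    """Return True when a 'ceph osd stat'-style line reports every OSD up,
    e.g. '6 osds: 6 up (since 2s), 6 in (since 5s)'."""
    m = re.search(r"(\d+) osds: (\d+) up", osd_stat_line)
    if not m:
        return False
    total, up = map(int, m.groups())
    return total > 0 and up == total

print(all_osds_up("6 osds: 6 up (since 2s), 6 in (since 5s)"))  # all up
print(all_osds_up("6 osds: 4 up (since 2s), 6 in (since 5s)"))  # keep retrying
```

The retry message in the log corresponds to the condition returning false; once the final two OSDs register, the task succeeds on a later attempt.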
2025-09-18 10:46:43.180280 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-09-18 10:46:43.180285 | orchestrator |
2025-09-18 10:46:43.180290 | orchestrator | TASK [ceph-osd : Include crush_rules.yml] **************************************
2025-09-18 10:46:43.180295 | orchestrator | Thursday 18 September 2025 10:43:54 +0000 (0:00:13.145) 0:08:21.975 ****
2025-09-18 10:46:43.180299 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180304 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180309 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180314 | orchestrator |
2025-09-18 10:46:43.180319 | orchestrator | RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
2025-09-18 10:46:43.180323 | orchestrator | Thursday 18 September 2025 10:43:55 +0000 (0:00:00.859) 0:08:22.835 ****
2025-09-18 10:46:43.180328 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180333 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180338 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180342 | orchestrator |
2025-09-18 10:46:43.180347 | orchestrator | RUNNING HANDLER [ceph-handler : Osds handler] **********************************
2025-09-18 10:46:43.180352 | orchestrator | Thursday 18 September 2025 10:43:56 +0000 (0:00:00.579) 0:08:23.414 ****
2025-09-18 10:46:43.180357 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 10:46:43.180362 | orchestrator |
2025-09-18 10:46:43.180366 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact trigger_restart] **********************
2025-09-18 10:46:43.180371 | orchestrator | Thursday 18 September 2025 10:43:56 +0000 (0:00:00.523) 0:08:23.938 ****
2025-09-18 10:46:43.180376 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.180381 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:46:43.180386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:46:43.180391 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180395 | orchestrator |
2025-09-18 10:46:43.180400 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called before restart] ********
2025-09-18 10:46:43.180405 | orchestrator | Thursday 18 September 2025 10:43:57 +0000 (0:00:00.378) 0:08:24.317 ****
2025-09-18 10:46:43.180410 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180415 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180419 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180428 | orchestrator |
2025-09-18 10:46:43.180432 | orchestrator | RUNNING HANDLER [ceph-handler : Unset noup flag] *******************************
2025-09-18 10:46:43.180437 | orchestrator | Thursday 18 September 2025 10:43:57 +0000 (0:00:00.310) 0:08:24.627 ****
2025-09-18 10:46:43.180442 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180447 | orchestrator |
2025-09-18 10:46:43.180452 | orchestrator | RUNNING HANDLER [ceph-handler : Copy osd restart script] ***********************
2025-09-18 10:46:43.180456 | orchestrator | Thursday 18 September 2025 10:43:57 +0000 (0:00:00.215) 0:08:24.843 ****
2025-09-18 10:46:43.180461 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180466 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180473 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180478 | orchestrator |
2025-09-18 10:46:43.180483 | orchestrator | RUNNING HANDLER [ceph-handler : Get pool list] *********************************
2025-09-18 10:46:43.180488 | orchestrator | Thursday 18 September 2025 10:43:58 +0000 (0:00:00.624) 0:08:25.467 ****
2025-09-18 10:46:43.180493 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180497 | orchestrator |
2025-09-18 10:46:43.180502 | orchestrator | RUNNING HANDLER [ceph-handler : Get balancer module status] ********************
2025-09-18 10:46:43.180507 | orchestrator | Thursday 18 September 2025 10:43:58 +0000 (0:00:00.243) 0:08:25.711 ****
2025-09-18 10:46:43.180512 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180517 | orchestrator |
2025-09-18 10:46:43.180522 | orchestrator | RUNNING HANDLER [ceph-handler : Set_fact pools_pgautoscaler_mode] **************
2025-09-18 10:46:43.180526 | orchestrator | Thursday 18 September 2025 10:43:58 +0000 (0:00:00.222) 0:08:25.934 ****
2025-09-18 10:46:43.180531 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180536 | orchestrator |
2025-09-18 10:46:43.180541 | orchestrator | RUNNING HANDLER [ceph-handler : Disable balancer] ******************************
2025-09-18 10:46:43.180545 | orchestrator | Thursday 18 September 2025 10:43:58 +0000 (0:00:00.126) 0:08:26.060 ****
2025-09-18 10:46:43.180553 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180558 | orchestrator |
2025-09-18 10:46:43.180563 | orchestrator | RUNNING HANDLER [ceph-handler : Disable pg autoscale on pools] *****************
2025-09-18 10:46:43.180567 | orchestrator | Thursday 18 September 2025 10:43:59 +0000 (0:00:00.216) 0:08:26.276 ****
2025-09-18 10:46:43.180572 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180577 | orchestrator |
2025-09-18 10:46:43.180582 | orchestrator | RUNNING HANDLER [ceph-handler : Restart ceph osds daemon(s)] *******************
2025-09-18 10:46:43.180587 | orchestrator | Thursday 18 September 2025 10:43:59 +0000 (0:00:00.217) 0:08:26.494 ****
2025-09-18 10:46:43.180592 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-09-18 10:46:43.180596 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-09-18 10:46:43.180601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-09-18 10:46:43.180606 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180611 | orchestrator |
2025-09-18 10:46:43.180616 | orchestrator | RUNNING HANDLER [ceph-handler : Set _osd_handler_called after restart] *********
2025-09-18 10:46:43.180621 | orchestrator | Thursday 18 September 2025 10:43:59 +0000 (0:00:00.379) 0:08:26.873 ****
2025-09-18 10:46:43.180625 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180630 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180635 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180640 | orchestrator |
2025-09-18 10:46:43.180671 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable pg autoscale on pools] ***************
2025-09-18 10:46:43.180676 | orchestrator | Thursday 18 September 2025 10:44:00 +0000 (0:00:00.326) 0:08:27.199 ****
2025-09-18 10:46:43.180681 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180686 | orchestrator |
2025-09-18 10:46:43.180691 | orchestrator | RUNNING HANDLER [ceph-handler : Re-enable balancer] ****************************
2025-09-18 10:46:43.180696 | orchestrator | Thursday 18 September 2025 10:44:01 +0000 (0:00:00.896) 0:08:28.096 ****
2025-09-18 10:46:43.180700 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180705 | orchestrator |
2025-09-18 10:46:43.180714 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-09-18 10:46:43.180719 | orchestrator |
2025-09-18 10:46:43.180724 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************
2025-09-18 10:46:43.180729 | orchestrator | Thursday 18 September 2025 10:44:01 +0000 (0:00:00.653) 0:08:28.750 ****
2025-09-18 10:46:43.180734 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.180739 | orchestrator |
2025-09-18 10:46:43.180744 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] *********************
2025-09-18 10:46:43.180748 | orchestrator | Thursday 18 September 2025 10:44:03 +0000 (0:00:01.376) 0:08:30.127 ****
2025-09-18 10:46:43.180753 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:46:43.180758 | orchestrator |
2025-09-18 10:46:43.180763 | orchestrator | TASK [ceph-handler : Check for a mon container] ********************************
2025-09-18 10:46:43.180768 | orchestrator | Thursday 18 September 2025 10:44:04 +0000 (0:00:01.250) 0:08:31.377 ****
2025-09-18 10:46:43.180772 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180777 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180782 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180787 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.180792 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.180796 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.180801 | orchestrator |
2025-09-18 10:46:43.180806 | orchestrator | TASK [ceph-handler : Check for an osd container] *******************************
2025-09-18 10:46:43.180811 | orchestrator | Thursday 18 September 2025 10:44:05 +0000 (0:00:01.321) 0:08:32.699 ****
2025-09-18 10:46:43.180816 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.180820 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.180825 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.180830 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.180835 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.180840 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.180844 | orchestrator |
2025-09-18 10:46:43.180849 | orchestrator | TASK [ceph-handler : Check for a mds container] ********************************
2025-09-18 10:46:43.180854 | orchestrator | Thursday 18 September 2025 10:44:06 +0000 (0:00:00.792) 0:08:33.491 ****
2025-09-18 10:46:43.180859 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.180864 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.180869 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.180873 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.180878 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.180883 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.180888 | orchestrator |
2025-09-18 10:46:43.180893 | orchestrator | TASK [ceph-handler : Check for a rgw container] ********************************
2025-09-18 10:46:43.180900 | orchestrator | Thursday 18 September 2025 10:44:07 +0000 (0:00:01.014) 0:08:34.506 ****
2025-09-18 10:46:43.180905 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.180910 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.180915 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.180920 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.180924 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.180929 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.180934 | orchestrator |
2025-09-18 10:46:43.180939 | orchestrator | TASK [ceph-handler : Check for a mgr container] ********************************
2025-09-18 10:46:43.180944 | orchestrator | Thursday 18 September 2025 10:44:08 +0000 (0:00:00.730) 0:08:35.236 ****
2025-09-18 10:46:43.180949 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.180953 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.180958 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.180963 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.180971 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.180976 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.180981 | orchestrator |
2025-09-18 10:46:43.180986 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] *************************
2025-09-18 10:46:43.180993 | orchestrator | Thursday 18 September 2025 10:44:09 +0000 (0:00:01.080) 0:08:36.317 ****
2025-09-18 10:46:43.180998 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181003 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181008 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181013 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181018 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181022 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181027 | orchestrator |
2025-09-18 10:46:43.181032 | orchestrator | TASK [ceph-handler : Check for a nfs container] ********************************
2025-09-18 10:46:43.181037 | orchestrator | Thursday 18 September 2025 10:44:10 +0000 (0:00:00.886) 0:08:37.204 ****
2025-09-18 10:46:43.181042 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181047 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181051 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181056 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181061 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181065 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181070 | orchestrator |
2025-09-18 10:46:43.181075 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] *************************
2025-09-18 10:46:43.181080 | orchestrator | Thursday 18 September 2025 10:44:10 +0000 (0:00:00.632) 0:08:37.837 ****
2025-09-18 10:46:43.181085 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181090 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181094 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181099 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181104 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181109 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181113 | orchestrator |
2025-09-18 10:46:43.181118 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] **********************
2025-09-18 10:46:43.181125 | orchestrator | Thursday 18 September 2025 10:44:12 +0000 (0:00:01.364) 0:08:39.201 ****
2025-09-18 10:46:43.181133 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181140 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181147 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181155 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181163 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181169 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181174 | orchestrator |
2025-09-18 10:46:43.181179 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] *******************
2025-09-18 10:46:43.181183 | orchestrator | Thursday 18 September 2025 10:44:13 +0000 (0:00:00.980) 0:08:40.182 ****
2025-09-18 10:46:43.181188 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181193 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181198 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181202 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181207 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181211 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181216 | orchestrator |
2025-09-18 10:46:43.181220 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ******************************
2025-09-18 10:46:43.181225 | orchestrator | Thursday 18 September 2025 10:44:14 +0000 (0:00:00.948) 0:08:41.131 ****
2025-09-18 10:46:43.181230 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181234 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181239 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181243 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181248 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181252 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181257 | orchestrator |
2025-09-18 10:46:43.181261 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ******************************
2025-09-18 10:46:43.181266 | orchestrator | Thursday 18 September 2025 10:44:14 +0000 (0:00:00.764) 0:08:41.895 ****
2025-09-18 10:46:43.181274 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181279 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181283 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181288 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181292 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181297 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181301 | orchestrator |
2025-09-18 10:46:43.181306 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ******************************
2025-09-18 10:46:43.181311 | orchestrator | Thursday 18 September 2025 10:44:15 +0000 (0:00:00.931) 0:08:42.827 ****
2025-09-18 10:46:43.181315 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181320 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181324 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181329 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181333 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181338 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181342 | orchestrator |
2025-09-18 10:46:43.181347 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
2025-09-18 10:46:43.181351 | orchestrator | Thursday 18 September 2025 10:44:16 +0000 (0:00:00.647) 0:08:43.474 ****
2025-09-18 10:46:43.181356 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181360 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181365 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181370 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181374 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181379 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181383 | orchestrator |
2025-09-18 10:46:43.181388 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
2025-09-18 10:46:43.181397 | orchestrator | Thursday 18 September 2025 10:44:17 +0000 (0:00:00.888) 0:08:44.363 ****
2025-09-18 10:46:43.181401 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181406 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181411 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181415 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181420 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181424 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181429 | orchestrator |
2025-09-18 10:46:43.181433 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
2025-09-18 10:46:43.181438 | orchestrator | Thursday 18 September 2025 10:44:17 +0000 (0:00:00.625) 0:08:44.988 ****
2025-09-18 10:46:43.181443 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181447 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181452 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181456 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:46:43.181460 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:46:43.181465 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:46:43.181469 | orchestrator |
2025-09-18 10:46:43.181474 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
2025-09-18 10:46:43.181482 | orchestrator | Thursday 18 September 2025 10:44:18 +0000 (0:00:00.898) 0:08:45.887 ****
2025-09-18 10:46:43.181486 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:46:43.181491 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:46:43.181496 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:46:43.181500 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181505 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181509 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181514 | orchestrator |
2025-09-18 10:46:43.181518 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] ****************************
2025-09-18 10:46:43.181523 | orchestrator | Thursday 18 September 2025 10:44:19 +0000 (0:00:00.631) 0:08:46.518 ****
2025-09-18 10:46:43.181527 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181532 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181536 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181544 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181549 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181553 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181558 | orchestrator |
2025-09-18 10:46:43.181562 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] *************************
2025-09-18 10:46:43.181567 | orchestrator | Thursday 18 September 2025 10:44:20 +0000 (0:00:00.913) 0:08:47.432 ****
2025-09-18 10:46:43.181571 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:46:43.181576 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:46:43.181580 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:46:43.181585 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:46:43.181589 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:46:43.181594 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:46:43.181598 | orchestrator |
2025-09-18 10:46:43.181603 | orchestrator | TASK [ceph-crash : Create client.crash keyring] ********************************
2025-09-18 10:46:43.181607 | orchestrator | Thursday 18 September 2025 10:44:21 +0000 (0:00:01.307) 0:08:48.739 ****
2025-09-18 10:46:43.181612 | orchestrator | changed: [testbed-node-3 ->
testbed-node-0(192.168.16.10)] 2025-09-18 10:46:43.181616 | orchestrator | 2025-09-18 10:46:43.181621 | orchestrator | TASK [ceph-crash : Get keys from monitors] ************************************* 2025-09-18 10:46:43.181625 | orchestrator | Thursday 18 September 2025 10:44:25 +0000 (0:00:03.922) 0:08:52.661 **** 2025-09-18 10:46:43.181630 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 10:46:43.181634 | orchestrator | 2025-09-18 10:46:43.181639 | orchestrator | TASK [ceph-crash : Copy ceph key(s) if needed] ********************************* 2025-09-18 10:46:43.181653 | orchestrator | Thursday 18 September 2025 10:44:27 +0000 (0:00:01.999) 0:08:54.660 **** 2025-09-18 10:46:43.181658 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.181662 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.181667 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.181671 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.181676 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.181680 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.181685 | orchestrator | 2025-09-18 10:46:43.181689 | orchestrator | TASK [ceph-crash : Create /var/lib/ceph/crash/posted] ************************** 2025-09-18 10:46:43.181694 | orchestrator | Thursday 18 September 2025 10:44:29 +0000 (0:00:01.546) 0:08:56.206 **** 2025-09-18 10:46:43.181698 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.181703 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.181707 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.181712 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.181716 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.181721 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.181725 | orchestrator | 2025-09-18 10:46:43.181730 | orchestrator | TASK [ceph-crash : Include_tasks systemd.yml] ********************************** 
2025-09-18 10:46:43.181734 | orchestrator | Thursday 18 September 2025 10:44:30 +0000 (0:00:01.344) 0:08:57.551 **** 2025-09-18 10:46:43.181739 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:46:43.181744 | orchestrator | 2025-09-18 10:46:43.181748 | orchestrator | TASK [ceph-crash : Generate systemd unit file for ceph-crash container] ******** 2025-09-18 10:46:43.181752 | orchestrator | Thursday 18 September 2025 10:44:31 +0000 (0:00:01.337) 0:08:58.889 **** 2025-09-18 10:46:43.181757 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.181762 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.181766 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.181771 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.181775 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.181779 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.181784 | orchestrator | 2025-09-18 10:46:43.181788 | orchestrator | TASK [ceph-crash : Start the ceph-crash service] ******************************* 2025-09-18 10:46:43.181793 | orchestrator | Thursday 18 September 2025 10:44:33 +0000 (0:00:01.664) 0:09:00.554 **** 2025-09-18 10:46:43.181801 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.181805 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.181810 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.181814 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.181819 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.181826 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.181831 | orchestrator | 2025-09-18 10:46:43.181835 | orchestrator | RUNNING HANDLER [ceph-handler : Ceph crash handler] **************************** 2025-09-18 10:46:43.181840 | orchestrator | Thursday 18 September 2025 10:44:37 +0000 (0:00:03.736) 
0:09:04.290 **** 2025-09-18 10:46:43.181845 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-3, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-4, testbed-node-2 2025-09-18 10:46:43.181850 | orchestrator | 2025-09-18 10:46:43.181854 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called before restart] ****** 2025-09-18 10:46:43.181859 | orchestrator | Thursday 18 September 2025 10:44:38 +0000 (0:00:01.531) 0:09:05.822 **** 2025-09-18 10:46:43.181863 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.181868 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.181872 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.181877 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.181881 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.181886 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.181890 | orchestrator | 2025-09-18 10:46:43.181895 | orchestrator | RUNNING HANDLER [ceph-handler : Restart the ceph-crash service] **************** 2025-09-18 10:46:43.181902 | orchestrator | Thursday 18 September 2025 10:44:39 +0000 (0:00:00.610) 0:09:06.432 **** 2025-09-18 10:46:43.181907 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.181911 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.181916 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.181920 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:46:43.181925 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:46:43.181930 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:46:43.181934 | orchestrator | 2025-09-18 10:46:43.181939 | orchestrator | RUNNING HANDLER [ceph-handler : Set _crash_handler_called after restart] ******* 2025-09-18 10:46:43.181943 | orchestrator | Thursday 18 September 2025 10:44:41 +0000 (0:00:02.384) 0:09:08.817 **** 2025-09-18 10:46:43.181948 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.181953 | orchestrator | 
ok: [testbed-node-4] 2025-09-18 10:46:43.181957 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.181962 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:46:43.181966 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:46:43.181971 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:46:43.181975 | orchestrator | 2025-09-18 10:46:43.181980 | orchestrator | PLAY [Apply role ceph-mds] ***************************************************** 2025-09-18 10:46:43.181984 | orchestrator | 2025-09-18 10:46:43.181989 | orchestrator | TASK [ceph-handler : Include check_running_cluster.yml] ************************ 2025-09-18 10:46:43.181994 | orchestrator | Thursday 18 September 2025 10:44:42 +0000 (0:00:00.805) 0:09:09.623 **** 2025-09-18 10:46:43.181998 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:46:43.182003 | orchestrator | 2025-09-18 10:46:43.182007 | orchestrator | TASK [ceph-handler : Include check_running_containers.yml] ********************* 2025-09-18 10:46:43.182012 | orchestrator | Thursday 18 September 2025 10:44:43 +0000 (0:00:00.628) 0:09:10.252 **** 2025-09-18 10:46:43.182039 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:46:43.182044 | orchestrator | 2025-09-18 10:46:43.182049 | orchestrator | TASK [ceph-handler : Check for a mon container] ******************************** 2025-09-18 10:46:43.182054 | orchestrator | Thursday 18 September 2025 10:44:43 +0000 (0:00:00.448) 0:09:10.700 **** 2025-09-18 10:46:43.182058 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182063 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182071 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182075 | orchestrator | 2025-09-18 10:46:43.182080 | orchestrator | TASK [ceph-handler : Check for an osd container] 
******************************* 2025-09-18 10:46:43.182084 | orchestrator | Thursday 18 September 2025 10:44:44 +0000 (0:00:00.421) 0:09:11.122 **** 2025-09-18 10:46:43.182089 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182093 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182098 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182103 | orchestrator | 2025-09-18 10:46:43.182107 | orchestrator | TASK [ceph-handler : Check for a mds container] ******************************** 2025-09-18 10:46:43.182112 | orchestrator | Thursday 18 September 2025 10:44:44 +0000 (0:00:00.653) 0:09:11.776 **** 2025-09-18 10:46:43.182116 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182121 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182125 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182130 | orchestrator | 2025-09-18 10:46:43.182134 | orchestrator | TASK [ceph-handler : Check for a rgw container] ******************************** 2025-09-18 10:46:43.182139 | orchestrator | Thursday 18 September 2025 10:44:45 +0000 (0:00:00.696) 0:09:12.472 **** 2025-09-18 10:46:43.182144 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182148 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182153 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182157 | orchestrator | 2025-09-18 10:46:43.182162 | orchestrator | TASK [ceph-handler : Check for a mgr container] ******************************** 2025-09-18 10:46:43.182166 | orchestrator | Thursday 18 September 2025 10:44:46 +0000 (0:00:00.802) 0:09:13.275 **** 2025-09-18 10:46:43.182171 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182175 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182180 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182184 | orchestrator | 2025-09-18 10:46:43.182189 | orchestrator | TASK [ceph-handler : Check for a rbd mirror container] ************************* 2025-09-18 
10:46:43.182194 | orchestrator | Thursday 18 September 2025 10:44:46 +0000 (0:00:00.662) 0:09:13.937 **** 2025-09-18 10:46:43.182198 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182203 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182207 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182212 | orchestrator | 2025-09-18 10:46:43.182216 | orchestrator | TASK [ceph-handler : Check for a nfs container] ******************************** 2025-09-18 10:46:43.182221 | orchestrator | Thursday 18 September 2025 10:44:47 +0000 (0:00:00.361) 0:09:14.299 **** 2025-09-18 10:46:43.182225 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182230 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182234 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182239 | orchestrator | 2025-09-18 10:46:43.182244 | orchestrator | TASK [ceph-handler : Check for a ceph-crash container] ************************* 2025-09-18 10:46:43.182251 | orchestrator | Thursday 18 September 2025 10:44:47 +0000 (0:00:00.350) 0:09:14.649 **** 2025-09-18 10:46:43.182255 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182260 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182265 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182269 | orchestrator | 2025-09-18 10:46:43.182274 | orchestrator | TASK [ceph-handler : Check for a ceph-exporter container] ********************** 2025-09-18 10:46:43.182278 | orchestrator | Thursday 18 September 2025 10:44:48 +0000 (0:00:00.745) 0:09:15.394 **** 2025-09-18 10:46:43.182283 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182287 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182292 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182296 | orchestrator | 2025-09-18 10:46:43.182301 | orchestrator | TASK [ceph-handler : Include check_socket_non_container.yml] ******************* 2025-09-18 10:46:43.182305 | orchestrator | 
Thursday 18 September 2025 10:44:49 +0000 (0:00:01.164) 0:09:16.560 **** 2025-09-18 10:46:43.182310 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182315 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182319 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182324 | orchestrator | 2025-09-18 10:46:43.182332 | orchestrator | TASK [ceph-handler : Set_fact handler_mon_status] ****************************** 2025-09-18 10:46:43.182339 | orchestrator | Thursday 18 September 2025 10:44:49 +0000 (0:00:00.311) 0:09:16.871 **** 2025-09-18 10:46:43.182344 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182348 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182353 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182357 | orchestrator | 2025-09-18 10:46:43.182362 | orchestrator | TASK [ceph-handler : Set_fact handler_osd_status] ****************************** 2025-09-18 10:46:43.182366 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.325) 0:09:17.197 **** 2025-09-18 10:46:43.182371 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182376 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182380 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182385 | orchestrator | 2025-09-18 10:46:43.182389 | orchestrator | TASK [ceph-handler : Set_fact handler_mds_status] ****************************** 2025-09-18 10:46:43.182394 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.299) 0:09:17.496 **** 2025-09-18 10:46:43.182398 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182403 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182408 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182412 | orchestrator | 2025-09-18 10:46:43.182417 | orchestrator | TASK [ceph-handler : Set_fact handler_rgw_status] ****************************** 2025-09-18 10:46:43.182421 | orchestrator | Thursday 18 September 2025 10:44:50 
+0000 (0:00:00.463) 0:09:17.959 **** 2025-09-18 10:46:43.182426 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182430 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182435 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182439 | orchestrator | 2025-09-18 10:46:43.182444 | orchestrator | TASK [ceph-handler : Set_fact handler_nfs_status] ****************************** 2025-09-18 10:46:43.182449 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:00.284) 0:09:18.243 **** 2025-09-18 10:46:43.182453 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182458 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182462 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182467 | orchestrator | 2025-09-18 10:46:43.182471 | orchestrator | TASK [ceph-handler : Set_fact handler_rbd_status] ****************************** 2025-09-18 10:46:43.182476 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:00.318) 0:09:18.562 **** 2025-09-18 10:46:43.182480 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182485 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182490 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182494 | orchestrator | 2025-09-18 10:46:43.182499 | orchestrator | TASK [ceph-handler : Set_fact handler_mgr_status] ****************************** 2025-09-18 10:46:43.182503 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:00.251) 0:09:18.814 **** 2025-09-18 10:46:43.182508 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182513 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182517 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182522 | orchestrator | 2025-09-18 10:46:43.182526 | orchestrator | TASK [ceph-handler : Set_fact handler_crash_status] **************************** 2025-09-18 10:46:43.182531 | orchestrator | Thursday 18 September 2025 10:44:52 +0000 (0:00:00.446) 
0:09:19.260 **** 2025-09-18 10:46:43.182535 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182540 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182544 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182549 | orchestrator | 2025-09-18 10:46:43.182554 | orchestrator | TASK [ceph-handler : Set_fact handler_exporter_status] ************************* 2025-09-18 10:46:43.182558 | orchestrator | Thursday 18 September 2025 10:44:52 +0000 (0:00:00.293) 0:09:19.554 **** 2025-09-18 10:46:43.182563 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:46:43.182567 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:46:43.182572 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:46:43.182576 | orchestrator | 2025-09-18 10:46:43.182581 | orchestrator | TASK [ceph-mds : Include create_mds_filesystems.yml] *************************** 2025-09-18 10:46:43.182586 | orchestrator | Thursday 18 September 2025 10:44:52 +0000 (0:00:00.489) 0:09:20.043 **** 2025-09-18 10:46:43.182594 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182598 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182603 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-09-18 10:46:43.182607 | orchestrator | 2025-09-18 10:46:43.182612 | orchestrator | TASK [ceph-facts : Get current default crush rule details] ********************* 2025-09-18 10:46:43.182616 | orchestrator | Thursday 18 September 2025 10:44:53 +0000 (0:00:00.593) 0:09:20.637 **** 2025-09-18 10:46:43.182621 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 10:46:43.182625 | orchestrator | 2025-09-18 10:46:43.182630 | orchestrator | TASK [ceph-facts : Get current default crush rule name] ************************ 2025-09-18 10:46:43.182635 | orchestrator | Thursday 18 September 2025 10:44:55 +0000 (0:00:02.155) 0:09:22.792 **** 2025-09-18 10:46:43.182641 | orchestrator | skipping: [testbed-node-3] => 
(item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-09-18 10:46:43.182673 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182678 | orchestrator | 2025-09-18 10:46:43.182682 | orchestrator | TASK [ceph-mds : Create filesystem pools] ************************************** 2025-09-18 10:46:43.182687 | orchestrator | Thursday 18 September 2025 10:44:55 +0000 (0:00:00.183) 0:09:22.976 **** 2025-09-18 10:46:43.182693 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:46:43.182702 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:46:43.182707 | orchestrator | 2025-09-18 10:46:43.182714 | orchestrator | TASK [ceph-mds : Create ceph filesystem] *************************************** 2025-09-18 10:46:43.182719 | orchestrator | Thursday 18 September 2025 10:45:04 +0000 (0:00:08.136) 0:09:31.112 **** 2025-09-18 10:46:43.182723 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-09-18 10:46:43.182728 | orchestrator | 2025-09-18 10:46:43.182733 | orchestrator | TASK [ceph-mds : Include common.yml] ******************************************* 2025-09-18 10:46:43.182737 | orchestrator | Thursday 18 September 2025 10:45:07 +0000 (0:00:03.342) 0:09:34.454 **** 2025-09-18 10:46:43.182742 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-18 10:46:43.182746 | orchestrator | 2025-09-18 10:46:43.182751 | orchestrator | TASK [ceph-mds : Create bootstrap-mds and mds directories] ********************* 2025-09-18 10:46:43.182755 | orchestrator | Thursday 18 September 2025 10:45:08 +0000 (0:00:00.851) 0:09:35.306 **** 2025-09-18 10:46:43.182760 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 10:46:43.182765 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 10:46:43.182769 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-09-18 10:46:43.182774 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-09-18 10:46:43.182778 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-09-18 10:46:43.182783 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-09-18 10:46:43.182787 | orchestrator | 2025-09-18 10:46:43.182792 | orchestrator | TASK [ceph-mds : Get keys from monitors] *************************************** 2025-09-18 10:46:43.182796 | orchestrator | Thursday 18 September 2025 10:45:09 +0000 (0:00:01.027) 0:09:36.333 **** 2025-09-18 10:46:43.182801 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:46:43.182809 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-09-18 10:46:43.182814 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 10:46:43.182819 | orchestrator | 2025-09-18 10:46:43.182823 | orchestrator | TASK [ceph-mds : Copy ceph key(s) if needed] *********************************** 2025-09-18 10:46:43.182828 | orchestrator | Thursday 18 September 2025 10:45:11 +0000 (0:00:02.261) 0:09:38.595 **** 2025-09-18 10:46:43.182832 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-09-18 10:46:43.182836 | orchestrator | skipping: [testbed-node-3] 
=> (item=None)  2025-09-18 10:46:43.182840 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.182844 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-09-18 10:46:43.182848 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-09-18 10:46:43.182852 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.182856 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-09-18 10:46:43.182860 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-09-18 10:46:43.182865 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.182869 | orchestrator | 2025-09-18 10:46:43.182873 | orchestrator | TASK [ceph-mds : Create mds keyring] ******************************************* 2025-09-18 10:46:43.182877 | orchestrator | Thursday 18 September 2025 10:45:12 +0000 (0:00:01.207) 0:09:39.803 **** 2025-09-18 10:46:43.182881 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.182885 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.182889 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.182893 | orchestrator | 2025-09-18 10:46:43.182897 | orchestrator | TASK [ceph-mds : Non_containerized.yml] **************************************** 2025-09-18 10:46:43.182901 | orchestrator | Thursday 18 September 2025 10:45:15 +0000 (0:00:02.731) 0:09:42.534 **** 2025-09-18 10:46:43.182906 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:46:43.182910 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:46:43.182914 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:46:43.182918 | orchestrator | 2025-09-18 10:46:43.182922 | orchestrator | TASK [ceph-mds : Containerized.yml] ******************************************** 2025-09-18 10:46:43.182926 | orchestrator | Thursday 18 September 2025 10:45:16 +0000 (0:00:00.597) 0:09:43.132 **** 2025-09-18 10:46:43.182930 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, 
testbed-node-5 2025-09-18 10:46:43.182934 | orchestrator | 2025-09-18 10:46:43.182938 | orchestrator | TASK [ceph-mds : Include_tasks systemd.yml] ************************************ 2025-09-18 10:46:43.182943 | orchestrator | Thursday 18 September 2025 10:45:16 +0000 (0:00:00.558) 0:09:43.691 **** 2025-09-18 10:46:43.182947 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:46:43.182951 | orchestrator | 2025-09-18 10:46:43.182955 | orchestrator | TASK [ceph-mds : Generate systemd unit file] *********************************** 2025-09-18 10:46:43.182961 | orchestrator | Thursday 18 September 2025 10:45:17 +0000 (0:00:00.809) 0:09:44.500 **** 2025-09-18 10:46:43.182965 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.182970 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.182974 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.182978 | orchestrator | 2025-09-18 10:46:43.182982 | orchestrator | TASK [ceph-mds : Generate systemd ceph-mds target file] ************************ 2025-09-18 10:46:43.182986 | orchestrator | Thursday 18 September 2025 10:45:18 +0000 (0:00:01.258) 0:09:45.759 **** 2025-09-18 10:46:43.182990 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.182994 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.182998 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.183002 | orchestrator | 2025-09-18 10:46:43.183006 | orchestrator | TASK [ceph-mds : Enable ceph-mds.target] *************************************** 2025-09-18 10:46:43.183010 | orchestrator | Thursday 18 September 2025 10:45:19 +0000 (0:00:01.206) 0:09:46.965 **** 2025-09-18 10:46:43.183015 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:46:43.183019 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:46:43.183026 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:46:43.183030 | orchestrator | 2025-09-18 
TASK [ceph-mds : Systemd start mds container] **********************************
Thursday 18 September 2025 10:45:21 +0000 (0:00:01.743) 0:09:48.709 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-mds : Wait for mds socket to exist] *********************************
Thursday 18 September 2025 10:45:23 +0000 (0:00:02.294) 0:09:51.003 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 18 September 2025 10:45:25 +0000 (0:00:01.227) 0:09:52.231 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Mdss handler] **********************************
Thursday 18 September 2025 10:45:26 +0000 (0:00:00.981) 0:09:53.213 ****
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _mds_handler_called before restart] ********
Thursday 18 September 2025 10:45:26 +0000 (0:00:00.540) 0:09:53.753 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy mds restart script] ***********************
Thursday 18 September 2025 10:45:26 +0000 (0:00:00.324) 0:09:54.077 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph mds daemon(s)] ********************
Thursday 18 September 2025 10:45:28 +0000 (0:00:01.551) 0:09:55.629 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _mds_handler_called after restart] *********
Thursday 18 September 2025 10:45:29 +0000 (0:00:00.638) 0:09:56.267 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY [Apply role ceph-rgw] *****************************************************

TASK [ceph-handler : Include check_running_cluster.yml] ************************
Thursday 18 September 2025 10:45:29 +0000 (0:00:00.541) 0:09:56.809 ****
included: /ansible/roles/ceph-handler/tasks/check_running_cluster.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Include check_running_containers.yml] *********************
Thursday 18 September 2025 10:45:30 +0000 (0:00:00.861) 0:09:57.670 ****
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : Check for a mon container] ********************************
Thursday 18 September 2025 10:45:31 +0000 (0:00:00.536) 0:09:58.206 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for an osd container] *******************************
Thursday 18 September 2025 10:45:31 +0000 (0:00:00.438) 0:09:58.645 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mds container] ********************************
Thursday 18 September 2025 10:45:32 +0000 (0:00:00.642) 0:09:59.288 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a rgw container] ********************************
Thursday 18 September 2025 10:45:32 +0000 (0:00:00.688) 0:09:59.976 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a mgr container] ********************************
Thursday 18 September 2025 10:45:33 +0000 (0:00:00.668) 0:10:00.645 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a rbd mirror container] *************************
Thursday 18 September 2025 10:45:33 +0000 (0:00:00.416) 0:10:01.062 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a nfs container] ********************************
Thursday 18 September 2025 10:45:34 +0000 (0:00:00.269) 0:10:01.331 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-crash container] *************************
Thursday 18 September 2025 10:45:34 +0000 (0:00:00.274) 0:10:01.606 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Check for a ceph-exporter container] **********************
Thursday 18 September 2025 10:45:35 +0000 (0:00:00.686) 0:10:02.292 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Include check_socket_non_container.yml] *******************
Thursday 18 September 2025 10:45:36 +0000 (0:00:00.821) 0:10:03.113 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mon_status] ******************************
Thursday 18 September 2025 10:45:36 +0000 (0:00:00.274) 0:10:03.387 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_osd_status] ******************************
Thursday 18 September 2025 10:45:36 +0000 (0:00:00.279) 0:10:03.667 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mds_status] ******************************
Thursday 18 September 2025 10:45:36 +0000 (0:00:00.287) 0:10:03.954 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rgw_status] ******************************
Thursday 18 September 2025 10:45:37 +0000 (0:00:00.450) 0:10:04.404 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_nfs_status] ******************************
Thursday 18 September 2025 10:45:37 +0000 (0:00:00.308) 0:10:04.713 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_rbd_status] ******************************
Thursday 18 September 2025 10:45:37 +0000 (0:00:00.273) 0:10:04.986 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_mgr_status] ******************************
Thursday 18 September 2025 10:45:38 +0000 (0:00:00.277) 0:10:05.263 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_crash_status] ****************************
Thursday 18 September 2025 10:45:38 +0000 (0:00:00.396) 0:10:05.659 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : Set_fact handler_exporter_status] *************************
Thursday 18 September 2025 10:45:38 +0000 (0:00:00.311) 0:10:05.971 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-rgw : Include common.yml] *******************************************
Thursday 18 September 2025 10:45:39 +0000 (0:00:00.494) 0:10:06.465 ****
included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Get keys from monitors] ***************************************
Thursday 18 September 2025 10:45:40 +0000 (0:00:00.688) 0:10:07.153 ****
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
skipping: [testbed-node-3] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Thursday 18 September 2025 10:45:42 +0000 (0:00:02.244) 0:10:09.398 ****
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-3] => (item=None)
skipping: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
skipping: [testbed-node-4] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5] => (item=None)
skipping: [testbed-node-5] => (item=None)
changed: [testbed-node-5]

TASK [ceph-rgw : Copy SSL certificate & key data to certificate path] **********
Thursday 18 September 2025 10:45:43 +0000 (0:00:01.134) 0:10:10.532 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks pre_requisite.yml] ******************************
Thursday 18 September 2025 10:45:43 +0000 (0:00:00.292) 0:10:10.825 ****
included: /ansible/roles/ceph-rgw/tasks/pre_requisite.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Create rados gateway directories] *****************************
Thursday 18 September 2025 10:45:44 +0000 (0:00:00.665) 0:10:11.490 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

TASK [ceph-rgw : Create rgw keyrings] ******************************************
Thursday 18 September 2025 10:45:45 +0000 (0:00:00.838) 0:10:12.329 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-3 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-4 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]
changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
changed: [testbed-node-5 -> {{ groups[mon_group_name][0] if groups.get(mon_group_name, []) | length > 0 else 'localhost' }}]

TASK [ceph-rgw : Get keys from monitors] ***************************************
Thursday 18 September 2025 10:45:50 +0000 (0:00:04.847) 0:10:17.177 ****
ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-4 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]

TASK [ceph-rgw : Copy ceph key(s) if needed] ***********************************
Thursday 18 September 2025 10:45:52 +0000 (0:00:02.852) 0:10:20.030 ****
changed: [testbed-node-3] => (item=None)
changed: [testbed-node-3]
changed: [testbed-node-4] => (item=None)
changed: [testbed-node-5] => (item=None)
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Rgw pool creation tasks] **************************************
Thursday 18 September 2025 10:45:54 +0000 (0:00:01.224) 0:10:21.254 ****
included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3

TASK [ceph-rgw : Create ec profile] ********************************************
Thursday 18 September 2025 10:45:54 +0000 (0:00:00.225) 0:10:21.479 ****
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Set crush rule] ***********************************************
Thursday 18 September 2025 10:45:54 +0000 (0:00:00.539) 0:10:22.019 ****
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
skipping: [testbed-node-3]

TASK [ceph-rgw : Create rgw pools] *********************************************
Thursday 18 September 2025 10:45:55 +0000 (0:00:00.762) 0:10:22.781 ****
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})

TASK [ceph-rgw : Include_tasks openstack-keystone.yml] *************************
Thursday 18 September 2025 10:46:27 +0000 (0:00:31.697) 0:10:54.479 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include_tasks start_radosgw.yml] ******************************
Thursday 18 September 2025 10:46:27 +0000 (0:00:00.312) 0:10:54.792 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-rgw : Include start_docker_rgw.yml] *********************************
Thursday 18 September 2025 10:46:28 +0000 (0:00:00.626) 0:10:55.418 ****
included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Include_task systemd.yml] *************************************
Thursday 18 September 2025 10:46:28 +0000 (0:00:00.558) 0:10:55.977 ****
included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-rgw : Generate systemd unit file] ***********************************
Thursday 18 September 2025 10:46:29 +0000 (0:00:00.789) 0:10:56.767 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Generate systemd ceph-radosgw target file] ********************
Thursday 18 September 2025 10:46:31 +0000 (0:00:01.342) 0:10:58.110 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Enable ceph-radosgw.target] ***********************************
Thursday 18 September 2025 10:46:32 +0000 (0:00:01.376) 0:10:59.487 ****
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [ceph-rgw : Systemd start rgw container] **********************************
Thursday 18 September 2025 10:46:34 +0000 (0:00:01.759) 0:11:01.246 ****
changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})

RUNNING HANDLER [ceph-handler : Make tempdir for scripts] **********************
Thursday 18 September 2025 10:46:36 +0000 (0:00:02.769) 0:11:04.016 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Rgws handler] **********************************
Thursday 18 September 2025 10:46:37 +0000 (0:00:00.368) 0:11:04.384 ****
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called before restart] ********
Thursday 18 September 2025 10:46:38 +0000 (0:00:00.846) 0:11:05.230 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Copy rgw restart script] ***********************
Thursday 18 September 2025 10:46:38 +0000 (0:00:00.337) 0:11:05.568 ****
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : Restart ceph rgw daemon(s)] ********************
Thursday 18 September 2025 10:46:38 +0000 (0:00:00.351) 0:11:05.919 ****
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : Set _rgw_handler_called after restart] *********
Thursday 18 September 2025 10:46:40 +0000 (0:00:01.198) 0:11:07.117 ****
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

PLAY RECAP *********************************************************************
testbed-node-0 : ok=134  changed=35  unreachable=0  failed=0  skipped=125  rescued=0  ignored=0
testbed-node-1 : ok=127  changed=31  unreachable=0  failed=0  skipped=120  rescued=0  ignored=0
testbed-node-2 : ok=134  changed=33  unreachable=0  failed=0  skipped=119  rescued=0  ignored=0
testbed-node-3 : ok=193  changed=45  unreachable=0  failed=0  skipped=162  rescued=0  ignored=0
testbed-node-4 : ok=175  changed=40  unreachable=0  failed=0  skipped=123  rescued=0  ignored=0
testbed-node-5 : ok=177  changed=41  unreachable=0  failed=0  skipped=121  rescued=0  ignored=0

TASKS RECAP ********************************************************************
Thursday 18 September 2025 10:46:40 +0000 (0:00:00.278) 0:11:07.396 ****
===============================================================================
ceph-container-common : Pulling Ceph container image ------------------- 45.38s
ceph-osd : Use ceph-volume to create osds ------------------------------ 43.41s
ceph-mgr : Wait for all mgr to be up ----------------------------------- 36.65s
ceph-rgw : Create rgw pools -------------------------------------------- 31.70s
ceph-mon : Waiting for the monitor(s) to form the quorum... ------------ 21.81s
ceph-mon : Set cluster configs ----------------------------------------- 14.55s
ceph-osd : Wait for all osd to be up ----------------------------------- 13.15s
ceph-mon : Fetch ceph initial keys ------------------------------------- 10.64s
ceph-mgr : Create ceph mgr keyring(s) on a mon node -------------------- 10.59s
ceph-mds : Create filesystem pools -------------------------------------- 8.14s
ceph-config : Create ceph initial directories --------------------------- 7.31s
ceph-mgr : Disable ceph mgr enabled modules ----------------------------- 6.46s
ceph-mgr : Add modules to ceph-mgr -------------------------------------- 4.86s
ceph-rgw : Create rgw keyrings ------------------------------------------ 4.85s
ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 4.42s
ceph-crash : Create client.crash keyring -------------------------------- 3.92s
ceph-crash : Start the ceph-crash service ------------------------------- 3.74s
ceph-osd : Systemd start osd -------------------------------------------- 3.51s
ceph-mds : Create ceph filesystem --------------------------------------- 3.34s
ceph-mon : Copy admin keyring over to mons ------------------------------ 3.32s

2025-09-18 10:46:46 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:46:46 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:46:46 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:46:46 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:46:49 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:46:49 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:46:49 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:46:49 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:46:52 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:46:52 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:46:52 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:46:52 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:46:55 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:46:55 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:46:55 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:46:55 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:46:58 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:46:58 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:46:58 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:46:58 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:47:01 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:47:01 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:47:01 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED
2025-09-18 10:47:01 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:47:04 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED
2025-09-18 10:47:04 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED
2025-09-18 10:47:04 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in
state STARTED 2025-09-18 10:47:04.517131 | orchestrator | 2025-09-18 10:47:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:07.562768 | orchestrator | 2025-09-18 10:47:07 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:07.564206 | orchestrator | 2025-09-18 10:47:07 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:07.565055 | orchestrator | 2025-09-18 10:47:07 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:07.565080 | orchestrator | 2025-09-18 10:47:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:10.615284 | orchestrator | 2025-09-18 10:47:10 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:10.617105 | orchestrator | 2025-09-18 10:47:10 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:10.618663 | orchestrator | 2025-09-18 10:47:10 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:10.619106 | orchestrator | 2025-09-18 10:47:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:13.664202 | orchestrator | 2025-09-18 10:47:13 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:13.666205 | orchestrator | 2025-09-18 10:47:13 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:13.668201 | orchestrator | 2025-09-18 10:47:13 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:13.668407 | orchestrator | 2025-09-18 10:47:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:16.723425 | orchestrator | 2025-09-18 10:47:16 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:16.724295 | orchestrator | 2025-09-18 10:47:16 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:16.726464 | orchestrator 
| 2025-09-18 10:47:16 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:16.727153 | orchestrator | 2025-09-18 10:47:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:19.778929 | orchestrator | 2025-09-18 10:47:19 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:19.781236 | orchestrator | 2025-09-18 10:47:19 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:19.783713 | orchestrator | 2025-09-18 10:47:19 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:19.783870 | orchestrator | 2025-09-18 10:47:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:22.827405 | orchestrator | 2025-09-18 10:47:22 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:22.828901 | orchestrator | 2025-09-18 10:47:22 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state STARTED 2025-09-18 10:47:22.832066 | orchestrator | 2025-09-18 10:47:22 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:22.832139 | orchestrator | 2025-09-18 10:47:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:25.880680 | orchestrator | 2025-09-18 10:47:25 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:25.881853 | orchestrator | 2025-09-18 10:47:25 | INFO  | Task 9aff25c9-83c8-4f22-849d-691c89fa2195 is in state SUCCESS 2025-09-18 10:47:25.884159 | orchestrator | 2025-09-18 10:47:25.884196 | orchestrator | 2025-09-18 10:47:25.884209 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:47:25.884221 | orchestrator | 2025-09-18 10:47:25.884232 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:47:25.884243 | orchestrator | Thursday 18 September 2025 10:44:31 +0000 (0:00:00.258) 
0:00:00.258 ****
2025-09-18 10:47:25.884254 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:47:25.884266 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:47:25.884277 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:47:25.884288 | orchestrator |
2025-09-18 10:47:25.884299 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:47:25.884310 | orchestrator | Thursday 18 September 2025 10:44:31 +0000 (0:00:00.298) 0:00:00.557 ****
2025-09-18 10:47:25.884322 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-09-18 10:47:25.884333 | orchestrator | ok: [testbed-node-1] => (item=enable_opensearch_True)
2025-09-18 10:47:25.884344 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-09-18 10:47:25.884355 | orchestrator |
2025-09-18 10:47:25.884365 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-09-18 10:47:25.884376 | orchestrator |
2025-09-18 10:47:25.884387 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-18 10:47:25.884398 | orchestrator | Thursday 18 September 2025 10:44:32 +0000 (0:00:00.489) 0:00:01.047 ****
2025-09-18 10:47:25.884409 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:47:25.884420 | orchestrator |
2025-09-18 10:47:25.884431 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-09-18 10:47:25.884625 | orchestrator | Thursday 18 September 2025 10:44:32 +0000 (0:00:00.482) 0:00:01.529 ****
2025-09-18 10:47:25.884639 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:47:25.884649 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:47:25.884660 | orchestrator | changed:
[testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-09-18 10:47:25.884671 | orchestrator |
2025-09-18 10:47:25.884682 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-09-18 10:47:25.884693 | orchestrator | Thursday 18 September 2025 10:44:33 +0000 (0:00:00.740) 0:00:02.269 ****
2025-09-18 10:47:25.884707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.884723 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.884791 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.884817 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.884841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.884862 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.884893 | orchestrator |
2025-09-18 10:47:25.884906 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-09-18 10:47:25.884922 | orchestrator | Thursday 18 September 2025 10:44:35 +0000 (0:00:01.852) 0:00:04.122 ****
2025-09-18 10:47:25.884934 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:47:25.884945 | orchestrator |
2025-09-18 10:47:25.884956 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] *****
2025-09-18 10:47:25.884967 | orchestrator | Thursday 18 September 2025 10:44:35 +0000 (0:00:00.537) 0:00:04.659 ****
2025-09-18 10:47:25.884989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True,
'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885014 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885026 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885071 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885083 | orchestrator |
2025-09-18 10:47:25.885094 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] ***
2025-09-18 10:47:25.885105 | orchestrator | Thursday 18 September 2025 10:44:38 +0000 (0:00:03.028) 0:00:07.688 ****
2025-09-18 10:47:25.885117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885144 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885156 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:47:25.885173 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885205 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:47:25.885219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http',
'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885233 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885253 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:47:25.885266 | orchestrator |
2025-09-18 10:47:25.885278 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] ***
2025-09-18 10:47:25.885290 | orchestrator | Thursday 18 September 2025 10:44:39 +0000 (0:00:00.805) 0:00:08.493 ****
2025-09-18 10:47:25.885307 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885328 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885341 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:47:25.885354 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885393 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885407 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:47:25.885429 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885443 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:47:25.885455 | orchestrator |
2025-09-18 10:47:25.885466 | orchestrator | TASK [opensearch : Copying over config.json files for services] ****************
2025-09-18 10:47:25.885478 | orchestrator | Thursday 18 September 2025 10:44:40 +0000 (0:00:01.052) 0:00:09.546 ****
2025-09-18 10:47:25.885489 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885507 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885524 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-09-18 10:47:25.885543 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-09-18 10:47:25.885555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image':
'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 10:47:25.885575 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 10:47:25.885610 | orchestrator | 2025-09-18 10:47:25.885621 | orchestrator | TASK [opensearch : 
Copying over opensearch service config file] **************** 2025-09-18 10:47:25.885633 | orchestrator | Thursday 18 September 2025 10:44:43 +0000 (0:00:02.390) 0:00:11.937 **** 2025-09-18 10:47:25.885644 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:25.885656 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:25.885667 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:25.885678 | orchestrator | 2025-09-18 10:47:25.885689 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] ************* 2025-09-18 10:47:25.885700 | orchestrator | Thursday 18 September 2025 10:44:46 +0000 (0:00:02.922) 0:00:14.860 **** 2025-09-18 10:47:25.885711 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:25.885722 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:25.885733 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:25.885744 | orchestrator | 2025-09-18 10:47:25.885755 | orchestrator | TASK [opensearch : Check opensearch containers] ******************************** 2025-09-18 10:47:25.885766 | orchestrator | Thursday 18 September 2025 10:44:48 +0000 (0:00:02.267) 0:00:17.127 **** 2025-09-18 10:47:25.885783 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 
'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 10:47:25.885802 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 10:47:25.885822 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/opensearch:2024.2', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-09-18 10:47:25.885835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 
'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 10:47:25.885852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 10:47:25.885870 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/opensearch-dashboards:2024.2', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-09-18 10:47:25.885889 | orchestrator | 2025-09-18 10:47:25.885900 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 10:47:25.885912 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:02.206) 0:00:19.334 **** 2025-09-18 10:47:25.885923 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:25.885934 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:25.885945 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:25.885956 | orchestrator | 2025-09-18 10:47:25.885967 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-18 10:47:25.885978 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.276) 0:00:19.611 **** 2025-09-18 10:47:25.885989 | orchestrator | 2025-09-18 10:47:25.886000 | orchestrator | TASK [opensearch : Flush handlers] 
********************************************* 2025-09-18 10:47:25.886010 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.059) 0:00:19.670 **** 2025-09-18 10:47:25.886070 | orchestrator | 2025-09-18 10:47:25.886081 | orchestrator | TASK [opensearch : Flush handlers] ********************************************* 2025-09-18 10:47:25.886092 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.063) 0:00:19.733 **** 2025-09-18 10:47:25.886103 | orchestrator | 2025-09-18 10:47:25.886114 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************ 2025-09-18 10:47:25.886126 | orchestrator | Thursday 18 September 2025 10:44:50 +0000 (0:00:00.062) 0:00:19.796 **** 2025-09-18 10:47:25.886137 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:25.886148 | orchestrator | 2025-09-18 10:47:25.886158 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] ********************************* 2025-09-18 10:47:25.886169 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:00.194) 0:00:19.991 **** 2025-09-18 10:47:25.886181 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:25.886192 | orchestrator | 2025-09-18 10:47:25.886203 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ******************** 2025-09-18 10:47:25.886214 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:00.516) 0:00:20.507 **** 2025-09-18 10:47:25.886225 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:25.886236 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:25.886247 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:25.886258 | orchestrator | 2025-09-18 10:47:25.886269 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] ********* 2025-09-18 10:47:25.886280 | orchestrator | Thursday 18 September 2025 10:45:47 +0000 (0:00:56.219) 0:01:16.727 **** 2025-09-18 10:47:25.886291 | orchestrator | changed: 
[testbed-node-0] 2025-09-18 10:47:25.886301 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:25.886312 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:25.886323 | orchestrator | 2025-09-18 10:47:25.886334 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-09-18 10:47:25.886345 | orchestrator | Thursday 18 September 2025 10:47:12 +0000 (0:01:25.020) 0:02:41.747 **** 2025-09-18 10:47:25.886356 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:47:25.886367 | orchestrator | 2025-09-18 10:47:25.886377 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************ 2025-09-18 10:47:25.886389 | orchestrator | Thursday 18 September 2025 10:47:13 +0000 (0:00:00.474) 0:02:42.222 **** 2025-09-18 10:47:25.886400 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:25.886410 | orchestrator | 2025-09-18 10:47:25.886422 | orchestrator | TASK [opensearch : Check if a log retention policy exists] ********************* 2025-09-18 10:47:25.886433 | orchestrator | Thursday 18 September 2025 10:47:15 +0000 (0:00:02.571) 0:02:44.793 **** 2025-09-18 10:47:25.886443 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:25.886454 | orchestrator | 2025-09-18 10:47:25.886465 | orchestrator | TASK [opensearch : Create new log retention policy] **************************** 2025-09-18 10:47:25.886476 | orchestrator | Thursday 18 September 2025 10:47:18 +0000 (0:00:02.405) 0:02:47.198 **** 2025-09-18 10:47:25.886487 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:25.886498 | orchestrator | 2025-09-18 10:47:25.886562 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] ***************** 2025-09-18 10:47:25.886597 | orchestrator | Thursday 18 September 2025 10:47:21 +0000 (0:00:02.765) 0:02:49.963 **** 2025-09-18 10:47:25.886609 | orchestrator | changed: 
[testbed-node-0] 2025-09-18 10:47:25.886620 | orchestrator | 2025-09-18 10:47:25.886631 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:47:25.886643 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:47:25.886655 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:47:25.886667 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-09-18 10:47:25.886678 | orchestrator | 2025-09-18 10:47:25.886689 | orchestrator | 2025-09-18 10:47:25.886700 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:47:25.886718 | orchestrator | Thursday 18 September 2025 10:47:23 +0000 (0:00:02.459) 0:02:52.423 **** 2025-09-18 10:47:25.886729 | orchestrator | =============================================================================== 2025-09-18 10:47:25.886740 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 85.02s 2025-09-18 10:47:25.886751 | orchestrator | opensearch : Restart opensearch container ------------------------------ 56.22s 2025-09-18 10:47:25.886762 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.03s 2025-09-18 10:47:25.886772 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 2.92s 2025-09-18 10:47:25.886783 | orchestrator | opensearch : Create new log retention policy ---------------------------- 2.77s 2025-09-18 10:47:25.886794 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.57s 2025-09-18 10:47:25.886805 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.46s 2025-09-18 10:47:25.886816 | orchestrator | opensearch : Check if a log retention policy 
exists --------------------- 2.41s 2025-09-18 10:47:25.886827 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.39s 2025-09-18 10:47:25.886837 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 2.27s 2025-09-18 10:47:25.886848 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.21s 2025-09-18 10:47:25.886859 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.85s 2025-09-18 10:47:25.886870 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.05s 2025-09-18 10:47:25.886881 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 0.81s 2025-09-18 10:47:25.886892 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.74s 2025-09-18 10:47:25.886902 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.54s 2025-09-18 10:47:25.886913 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.52s 2025-09-18 10:47:25.886924 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.49s 2025-09-18 10:47:25.886935 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.48s 2025-09-18 10:47:25.886946 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.47s 2025-09-18 10:47:25.886957 | orchestrator | 2025-09-18 10:47:25 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:25.886969 | orchestrator | 2025-09-18 10:47:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:28.928137 | orchestrator | 2025-09-18 10:47:28 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:28.928341 | orchestrator | 2025-09-18 10:47:28 | INFO  | Task 
884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:28.928386 | orchestrator | 2025-09-18 10:47:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:31.976660 | orchestrator | 2025-09-18 10:47:31 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:31.978146 | orchestrator | 2025-09-18 10:47:31 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:31.978180 | orchestrator | 2025-09-18 10:47:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:35.022478 | orchestrator | 2025-09-18 10:47:35 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:35.023927 | orchestrator | 2025-09-18 10:47:35 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:35.024267 | orchestrator | 2025-09-18 10:47:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:38.058495 | orchestrator | 2025-09-18 10:47:38 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state STARTED 2025-09-18 10:47:38.060673 | orchestrator | 2025-09-18 10:47:38 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:38.060704 | orchestrator | 2025-09-18 10:47:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:41.101674 | orchestrator | 2025-09-18 10:47:41 | INFO  | Task bd1fc163-fcc1-4ce2-92cc-f2606a8add53 is in state SUCCESS 2025-09-18 10:47:41.102701 | orchestrator | 2025-09-18 10:47:41.102845 | orchestrator | 2025-09-18 10:47:41.103072 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-09-18 10:47:41.103102 | orchestrator | 2025-09-18 10:47:41.103115 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-09-18 10:47:41.103128 | orchestrator | Thursday 18 September 2025 10:44:31 +0000 (0:00:00.101) 0:00:00.101 **** 2025-09-18 10:47:41.103140 | orchestrator | 
ok: [localhost] => { 2025-09-18 10:47:41.103153 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 2025-09-18 10:47:41.103166 | orchestrator | } 2025-09-18 10:47:41.103179 | orchestrator | 2025-09-18 10:47:41.103191 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-09-18 10:47:41.103203 | orchestrator | Thursday 18 September 2025 10:44:31 +0000 (0:00:00.054) 0:00:00.155 **** 2025-09-18 10:47:41.103215 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-09-18 10:47:41.103227 | orchestrator | ...ignoring 2025-09-18 10:47:41.103239 | orchestrator | 2025-09-18 10:47:41.103251 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-09-18 10:47:41.103263 | orchestrator | Thursday 18 September 2025 10:44:34 +0000 (0:00:02.881) 0:00:03.037 **** 2025-09-18 10:47:41.103275 | orchestrator | skipping: [localhost] 2025-09-18 10:47:41.103286 | orchestrator | 2025-09-18 10:47:41.103298 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-09-18 10:47:41.103310 | orchestrator | Thursday 18 September 2025 10:44:34 +0000 (0:00:00.051) 0:00:03.089 **** 2025-09-18 10:47:41.103322 | orchestrator | ok: [localhost] 2025-09-18 10:47:41.103333 | orchestrator | 2025-09-18 10:47:41.103345 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:47:41.103357 | orchestrator | 2025-09-18 10:47:41.103369 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:47:41.103381 | orchestrator | Thursday 18 September 2025 10:44:34 +0000 (0:00:00.166) 0:00:03.255 **** 2025-09-18 10:47:41.103392 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.103404 | 
orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.103416 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.103427 | orchestrator | 2025-09-18 10:47:41.103439 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:47:41.103478 | orchestrator | Thursday 18 September 2025 10:44:34 +0000 (0:00:00.316) 0:00:03.572 **** 2025-09-18 10:47:41.103490 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-09-18 10:47:41.103502 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-09-18 10:47:41.103513 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-09-18 10:47:41.103524 | orchestrator | 2025-09-18 10:47:41.103535 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-09-18 10:47:41.103546 | orchestrator | 2025-09-18 10:47:41.103557 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-09-18 10:47:41.103569 | orchestrator | Thursday 18 September 2025 10:44:35 +0000 (0:00:00.573) 0:00:04.145 **** 2025-09-18 10:47:41.103605 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-09-18 10:47:41.103618 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-09-18 10:47:41.103629 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-09-18 10:47:41.103640 | orchestrator | 2025-09-18 10:47:41.103651 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 10:47:41.103664 | orchestrator | Thursday 18 September 2025 10:44:35 +0000 (0:00:00.355) 0:00:04.501 **** 2025-09-18 10:47:41.103677 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:47:41.103690 | orchestrator | 2025-09-18 10:47:41.103703 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 
2025-09-18 10:47:41.103715 | orchestrator | Thursday 18 September 2025 10:44:36 +0000 (0:00:00.555) 0:00:05.056 **** 2025-09-18 10:47:41.103764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.103784 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.103812 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.103827 | orchestrator | 2025-09-18 10:47:41.103849 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-09-18 10:47:41.103862 | orchestrator | Thursday 18 September 2025 10:44:39 +0000 (0:00:03.373) 0:00:08.430 **** 2025-09-18 10:47:41.103875 | orchestrator 
| skipping: [testbed-node-1] 2025-09-18 10:47:41.103889 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.103901 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.103913 | orchestrator | 2025-09-18 10:47:41.103925 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] *************************** 2025-09-18 10:47:41.103937 | orchestrator | Thursday 18 September 2025 10:44:40 +0000 (0:00:00.836) 0:00:09.267 **** 2025-09-18 10:47:41.103950 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.103962 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.103974 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.103987 | orchestrator | 2025-09-18 10:47:41.103999 | orchestrator | TASK [mariadb : Copying over config.json files for services] ******************* 2025-09-18 10:47:41.104018 | orchestrator | Thursday 18 September 2025 10:44:41 +0000 (0:00:01.328) 0:00:10.596 **** 2025-09-18 10:47:41.104030 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server 
testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.104063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 
2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.104077 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': 
False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.104096 | orchestrator | 2025-09-18 10:47:41.104108 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] **************** 2025-09-18 10:47:41.104118 | orchestrator | Thursday 18 September 2025 10:44:45 +0000 (0:00:03.768) 0:00:14.364 **** 2025-09-18 10:47:41.104130 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.104141 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.104152 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.104162 | orchestrator | 2025-09-18 10:47:41.104173 | orchestrator | TASK [mariadb : Copying over galera.cnf] *************************************** 2025-09-18 10:47:41.104185 | orchestrator | Thursday 18 September 2025 10:44:46 +0000 (0:00:01.139) 0:00:15.504 **** 2025-09-18 10:47:41.104196 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.104207 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:41.104218 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:41.104229 | orchestrator | 2025-09-18 10:47:41.104239 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 10:47:41.104251 | orchestrator | Thursday 18 September 2025 10:44:51 +0000 (0:00:04.759) 0:00:20.264 **** 2025-09-18 10:47:41.104262 | orchestrator | included: /ansible/roles/mariadb/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:47:41.104273 | orchestrator | 2025-09-18 
10:47:41.104284 | orchestrator | TASK [service-cert-copy : mariadb | Copying over extra CA certificates] ******** 2025-09-18 10:47:41.104295 | orchestrator | Thursday 18 September 2025 10:44:52 +0000 (0:00:00.550) 0:00:20.814 **** 2025-09-18 10:47:41.104320 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104340 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.104353 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-18 10:47:41.104365 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.104389 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104409 | orchestrator | skipping: 
[testbed-node-2] 2025-09-18 10:47:41.104420 | orchestrator | 2025-09-18 10:47:41.104431 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS certificate] *** 2025-09-18 10:47:41.104442 | orchestrator | Thursday 18 September 2025 10:44:55 +0000 (0:00:03.158) 0:00:23.973 **** 2025-09-18 10:47:41.104454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 
testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104466 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.104488 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server 
testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104507 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.104519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 
backup', '']}}}})  2025-09-18 10:47:41.104532 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.104543 | orchestrator | 2025-09-18 10:47:41.104554 | orchestrator | TASK [service-cert-copy : mariadb | Copying over backend internal TLS key] ***** 2025-09-18 10:47:41.104566 | orchestrator | Thursday 18 September 2025 10:44:57 +0000 (0:00:02.560) 0:00:26.533 **** 2025-09-18 10:47:41.104602 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 
3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104622 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.104642 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 
check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104655 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.104667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 
testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-09-18 10:47:41.104685 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.104696 | orchestrator | 2025-09-18 10:47:41.104707 | orchestrator | TASK [mariadb : Check mariadb containers] ************************************** 2025-09-18 10:47:41.104718 | orchestrator | Thursday 18 September 2025 10:45:00 +0000 (0:00:02.609) 0:00:29.143 **** 2025-09-18 10:47:41.104743 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': 
[' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.104757 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 
inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 10:47:41.104790 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/mariadb-server:2024.2', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', ''], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 3306 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 3306 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 3306 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-09-18 
10:47:41.104803 | orchestrator | 2025-09-18 10:47:41.104814 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-09-18 10:47:41.104826 | orchestrator | Thursday 18 September 2025 10:45:03 +0000 (0:00:03.061) 0:00:32.205 **** 2025-09-18 10:47:41.104837 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.104848 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:41.104860 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:41.104871 | orchestrator | 2025-09-18 10:47:41.104882 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-09-18 10:47:41.104893 | orchestrator | Thursday 18 September 2025 10:45:04 +0000 (0:00:00.950) 0:00:33.155 **** 2025-09-18 10:47:41.104904 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.104915 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.104926 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.104937 | orchestrator | 2025-09-18 10:47:41.104949 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-09-18 10:47:41.104959 | orchestrator | Thursday 18 September 2025 10:45:04 +0000 (0:00:00.406) 0:00:33.562 **** 2025-09-18 10:47:41.104971 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.104982 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.104993 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.105004 | orchestrator | 2025-09-18 10:47:41.105014 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-09-18 10:47:41.105025 | orchestrator | Thursday 18 September 2025 10:45:05 +0000 (0:00:00.324) 0:00:33.886 **** 2025-09-18 10:47:41.105037 | orchestrator | fatal: [testbed-node-0]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-09-18 10:47:41.105049 | orchestrator | ...ignoring 2025-09-18 10:47:41.105060 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-09-18 10:47:41.105071 | orchestrator | ...ignoring 2025-09-18 10:47:41.105089 | orchestrator | fatal: [testbed-node-2]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-09-18 10:47:41.105100 | orchestrator | ...ignoring 2025-09-18 10:47:41.105111 | orchestrator | 2025-09-18 10:47:41.105122 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-09-18 10:47:41.105133 | orchestrator | Thursday 18 September 2025 10:45:16 +0000 (0:00:10.885) 0:00:44.771 **** 2025-09-18 10:47:41.105144 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.105155 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.105165 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.105177 | orchestrator | 2025-09-18 10:47:41.105188 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-09-18 10:47:41.105199 | orchestrator | Thursday 18 September 2025 10:45:16 +0000 (0:00:00.441) 0:00:45.213 **** 2025-09-18 10:47:41.105209 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.105221 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.105232 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105243 | orchestrator | 2025-09-18 10:47:41.105254 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-09-18 10:47:41.105264 | orchestrator | Thursday 18 September 2025 10:45:17 +0000 (0:00:00.678) 0:00:45.892 **** 2025-09-18 10:47:41.105275 | orchestrator | skipping: 
[testbed-node-0] 2025-09-18 10:47:41.105286 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.105297 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105308 | orchestrator | 2025-09-18 10:47:41.105319 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-09-18 10:47:41.105330 | orchestrator | Thursday 18 September 2025 10:45:17 +0000 (0:00:00.486) 0:00:46.378 **** 2025-09-18 10:47:41.105341 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.105353 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.105364 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105375 | orchestrator | 2025-09-18 10:47:41.105386 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-09-18 10:47:41.105397 | orchestrator | Thursday 18 September 2025 10:45:18 +0000 (0:00:00.438) 0:00:46.817 **** 2025-09-18 10:47:41.105408 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.105419 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.105430 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.105441 | orchestrator | 2025-09-18 10:47:41.105460 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-09-18 10:47:41.105472 | orchestrator | Thursday 18 September 2025 10:45:18 +0000 (0:00:00.468) 0:00:47.285 **** 2025-09-18 10:47:41.105489 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.105500 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.105511 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105522 | orchestrator | 2025-09-18 10:47:41.105534 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 10:47:41.105545 | orchestrator | Thursday 18 September 2025 10:45:19 +0000 (0:00:00.856) 0:00:48.142 **** 2025-09-18 10:47:41.105556 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 10:47:41.105567 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105594 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-09-18 10:47:41.105606 | orchestrator | 2025-09-18 10:47:41.105617 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-09-18 10:47:41.105628 | orchestrator | Thursday 18 September 2025 10:45:19 +0000 (0:00:00.410) 0:00:48.553 **** 2025-09-18 10:47:41.105639 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.105650 | orchestrator | 2025-09-18 10:47:41.105661 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-09-18 10:47:41.105672 | orchestrator | Thursday 18 September 2025 10:45:29 +0000 (0:00:09.973) 0:00:58.526 **** 2025-09-18 10:47:41.105683 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.105694 | orchestrator | 2025-09-18 10:47:41.105712 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-09-18 10:47:41.105723 | orchestrator | Thursday 18 September 2025 10:45:29 +0000 (0:00:00.123) 0:00:58.650 **** 2025-09-18 10:47:41.105734 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.105745 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.105756 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.105767 | orchestrator | 2025-09-18 10:47:41.105778 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-09-18 10:47:41.105790 | orchestrator | Thursday 18 September 2025 10:45:30 +0000 (0:00:01.082) 0:00:59.732 **** 2025-09-18 10:47:41.105801 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.105812 | orchestrator | 2025-09-18 10:47:41.105823 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-09-18 10:47:41.105834 | orchestrator | Thursday 18 
September 2025 10:45:38 +0000 (0:00:07.080) 0:01:06.813 **** 2025-09-18 10:47:41.105845 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.105857 | orchestrator | 2025-09-18 10:47:41.105868 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-09-18 10:47:41.105879 | orchestrator | Thursday 18 September 2025 10:45:39 +0000 (0:00:01.611) 0:01:08.424 **** 2025-09-18 10:47:41.105890 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.105900 | orchestrator | 2025-09-18 10:47:41.105911 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-09-18 10:47:41.105922 | orchestrator | Thursday 18 September 2025 10:45:42 +0000 (0:00:02.493) 0:01:10.918 **** 2025-09-18 10:47:41.105933 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.105945 | orchestrator | 2025-09-18 10:47:41.105956 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-09-18 10:47:41.105967 | orchestrator | Thursday 18 September 2025 10:45:42 +0000 (0:00:00.114) 0:01:11.032 **** 2025-09-18 10:47:41.105978 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.105989 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.106000 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.106011 | orchestrator | 2025-09-18 10:47:41.106074 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-09-18 10:47:41.106085 | orchestrator | Thursday 18 September 2025 10:45:42 +0000 (0:00:00.287) 0:01:11.320 **** 2025-09-18 10:47:41.106096 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.106107 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-09-18 10:47:41.106119 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:41.106130 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:41.106141 | orchestrator | 
2025-09-18 10:47:41.106152 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-09-18 10:47:41.106163 | orchestrator | skipping: no hosts matched 2025-09-18 10:47:41.106175 | orchestrator | 2025-09-18 10:47:41.106186 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-18 10:47:41.106197 | orchestrator | 2025-09-18 10:47:41.106208 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 10:47:41.106219 | orchestrator | Thursday 18 September 2025 10:45:42 +0000 (0:00:00.412) 0:01:11.732 **** 2025-09-18 10:47:41.106230 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:47:41.106241 | orchestrator | 2025-09-18 10:47:41.106253 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 10:47:41.106264 | orchestrator | Thursday 18 September 2025 10:46:01 +0000 (0:00:19.002) 0:01:30.735 **** 2025-09-18 10:47:41.106275 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.106286 | orchestrator | 2025-09-18 10:47:41.106297 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 10:47:41.106308 | orchestrator | Thursday 18 September 2025 10:46:22 +0000 (0:00:20.675) 0:01:51.410 **** 2025-09-18 10:47:41.106319 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.106330 | orchestrator | 2025-09-18 10:47:41.106342 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-09-18 10:47:41.106361 | orchestrator | 2025-09-18 10:47:41.106372 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 10:47:41.106383 | orchestrator | Thursday 18 September 2025 10:46:25 +0000 (0:00:02.522) 0:01:53.933 **** 2025-09-18 10:47:41.106394 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:47:41.106405 | orchestrator | 
2025-09-18 10:47:41.106416 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 10:47:41.106427 | orchestrator | Thursday 18 September 2025 10:46:44 +0000 (0:00:19.624) 0:02:13.558 **** 2025-09-18 10:47:41.106439 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.106450 | orchestrator | 2025-09-18 10:47:41.106461 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 10:47:41.106472 | orchestrator | Thursday 18 September 2025 10:47:05 +0000 (0:00:20.604) 0:02:34.162 **** 2025-09-18 10:47:41.106488 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.106500 | orchestrator | 2025-09-18 10:47:41.106511 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-09-18 10:47:41.106522 | orchestrator | 2025-09-18 10:47:41.106540 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-09-18 10:47:41.106552 | orchestrator | Thursday 18 September 2025 10:47:08 +0000 (0:00:02.671) 0:02:36.833 **** 2025-09-18 10:47:41.106563 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.106601 | orchestrator | 2025-09-18 10:47:41.106614 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-09-18 10:47:41.106625 | orchestrator | Thursday 18 September 2025 10:47:24 +0000 (0:00:16.747) 0:02:53.581 **** 2025-09-18 10:47:41.106636 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.106647 | orchestrator | 2025-09-18 10:47:41.106658 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-09-18 10:47:41.106669 | orchestrator | Thursday 18 September 2025 10:47:25 +0000 (0:00:00.570) 0:02:54.151 **** 2025-09-18 10:47:41.106680 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.106691 | orchestrator | 2025-09-18 10:47:41.106702 | orchestrator | PLAY [Apply mariadb 
post-configuration] **************************************** 2025-09-18 10:47:41.106713 | orchestrator | 2025-09-18 10:47:41.106724 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-09-18 10:47:41.106735 | orchestrator | Thursday 18 September 2025 10:47:28 +0000 (0:00:02.737) 0:02:56.888 **** 2025-09-18 10:47:41.106746 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:47:41.106757 | orchestrator | 2025-09-18 10:47:41.106768 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-09-18 10:47:41.106779 | orchestrator | Thursday 18 September 2025 10:47:28 +0000 (0:00:00.539) 0:02:57.428 **** 2025-09-18 10:47:41.106790 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.106802 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.106813 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.106824 | orchestrator | 2025-09-18 10:47:41.106835 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-09-18 10:47:41.106846 | orchestrator | Thursday 18 September 2025 10:47:30 +0000 (0:00:02.327) 0:02:59.756 **** 2025-09-18 10:47:41.106857 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.106869 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.106880 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.106891 | orchestrator | 2025-09-18 10:47:41.106902 | orchestrator | TASK [mariadb : Creating database backup user and setting permissions] ********* 2025-09-18 10:47:41.106913 | orchestrator | Thursday 18 September 2025 10:47:33 +0000 (0:00:02.291) 0:03:02.048 **** 2025-09-18 10:47:41.106925 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.106936 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.106947 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.106958 | orchestrator | 
2025-09-18 10:47:41.106969 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-09-18 10:47:41.106980 | orchestrator | Thursday 18 September 2025 10:47:35 +0000 (0:00:02.183) 0:03:04.232 **** 2025-09-18 10:47:41.106998 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.107009 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.107021 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:47:41.107032 | orchestrator | 2025-09-18 10:47:41.107043 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-09-18 10:47:41.107054 | orchestrator | Thursday 18 September 2025 10:47:37 +0000 (0:00:02.167) 0:03:06.400 **** 2025-09-18 10:47:41.107065 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:47:41.107076 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:47:41.107087 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:47:41.107098 | orchestrator | 2025-09-18 10:47:41.107110 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-09-18 10:47:41.107121 | orchestrator | Thursday 18 September 2025 10:47:40 +0000 (0:00:02.652) 0:03:09.052 **** 2025-09-18 10:47:41.107132 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:47:41.107143 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:47:41.107154 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:47:41.107165 | orchestrator | 2025-09-18 10:47:41.107176 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:47:41.107188 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-09-18 10:47:41.107199 | orchestrator | testbed-node-0 : ok=34  changed=16  unreachable=0 failed=0 skipped=11  rescued=0 ignored=1  2025-09-18 10:47:41.107212 | orchestrator | testbed-node-1 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1 
 2025-09-18 10:47:41.107223 | orchestrator | testbed-node-2 : ok=20  changed=7  unreachable=0 failed=0 skipped=18  rescued=0 ignored=1  2025-09-18 10:47:41.107234 | orchestrator | 2025-09-18 10:47:41.107246 | orchestrator | 2025-09-18 10:47:41.107257 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:47:41.107268 | orchestrator | Thursday 18 September 2025 10:47:40 +0000 (0:00:00.476) 0:03:09.529 **** 2025-09-18 10:47:41.107279 | orchestrator | =============================================================================== 2025-09-18 10:47:41.107290 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 41.28s 2025-09-18 10:47:41.107301 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 38.63s 2025-09-18 10:47:41.107313 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 16.75s 2025-09-18 10:47:41.107323 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 10.89s 2025-09-18 10:47:41.107334 | orchestrator | mariadb : Running MariaDB bootstrap container --------------------------- 9.97s 2025-09-18 10:47:41.107350 | orchestrator | mariadb : Starting first MariaDB container ------------------------------ 7.08s 2025-09-18 10:47:41.107368 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.19s 2025-09-18 10:47:41.107380 | orchestrator | mariadb : Copying over galera.cnf --------------------------------------- 4.76s 2025-09-18 10:47:41.107390 | orchestrator | mariadb : Copying over config.json files for services ------------------- 3.77s 2025-09-18 10:47:41.107402 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 3.37s 2025-09-18 10:47:41.107412 | orchestrator | service-cert-copy : mariadb | Copying over extra CA certificates -------- 3.16s 2025-09-18 10:47:41.107423 | orchestrator | 
mariadb : Check mariadb containers -------------------------------------- 3.06s 2025-09-18 10:47:41.107434 | orchestrator | Check MariaDB service --------------------------------------------------- 2.88s 2025-09-18 10:47:41.107445 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.74s 2025-09-18 10:47:41.107456 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 2.65s 2025-09-18 10:47:41.107475 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS key ----- 2.61s 2025-09-18 10:47:41.107486 | orchestrator | service-cert-copy : mariadb | Copying over backend internal TLS certificate --- 2.56s 2025-09-18 10:47:41.107498 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.49s 2025-09-18 10:47:41.107509 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.33s 2025-09-18 10:47:41.107520 | orchestrator | mariadb : Creating mysql monitor user ----------------------------------- 2.29s 2025-09-18 10:47:41.107531 | orchestrator | 2025-09-18 10:47:41 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:41.107542 | orchestrator | 2025-09-18 10:47:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:44.158270 | orchestrator | 2025-09-18 10:47:44 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:44.160813 | orchestrator | 2025-09-18 10:47:44 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:44.163994 | orchestrator | 2025-09-18 10:47:44 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:44.164020 | orchestrator | 2025-09-18 10:47:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:47.208266 | orchestrator | 2025-09-18 10:47:47 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 
10:47:47.209627 | orchestrator | 2025-09-18 10:47:47 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:47.211210 | orchestrator | 2025-09-18 10:47:47 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:47.211234 | orchestrator | 2025-09-18 10:47:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:50.246297 | orchestrator | 2025-09-18 10:47:50 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:50.247280 | orchestrator | 2025-09-18 10:47:50 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:50.248552 | orchestrator | 2025-09-18 10:47:50 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:50.248631 | orchestrator | 2025-09-18 10:47:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:53.288099 | orchestrator | 2025-09-18 10:47:53 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:53.289768 | orchestrator | 2025-09-18 10:47:53 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:53.290915 | orchestrator | 2025-09-18 10:47:53 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:53.290945 | orchestrator | 2025-09-18 10:47:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:56.325722 | orchestrator | 2025-09-18 10:47:56 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:56.330336 | orchestrator | 2025-09-18 10:47:56 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:56.331871 | orchestrator | 2025-09-18 10:47:56 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:56.332741 | orchestrator | 2025-09-18 10:47:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:47:59.374779 | orchestrator | 2025-09-18 10:47:59 | 
INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:47:59.376018 | orchestrator | 2025-09-18 10:47:59 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:47:59.377884 | orchestrator | 2025-09-18 10:47:59 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:47:59.377945 | orchestrator | 2025-09-18 10:47:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:02.425813 | orchestrator | 2025-09-18 10:48:02 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:02.426207 | orchestrator | 2025-09-18 10:48:02 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:02.427615 | orchestrator | 2025-09-18 10:48:02 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:02.427919 | orchestrator | 2025-09-18 10:48:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:05.469402 | orchestrator | 2025-09-18 10:48:05 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:05.471671 | orchestrator | 2025-09-18 10:48:05 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:05.473212 | orchestrator | 2025-09-18 10:48:05 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:05.473335 | orchestrator | 2025-09-18 10:48:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:08.516711 | orchestrator | 2025-09-18 10:48:08 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:08.517068 | orchestrator | 2025-09-18 10:48:08 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:08.518153 | orchestrator | 2025-09-18 10:48:08 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:08.518181 | orchestrator | 2025-09-18 10:48:08 | INFO  | Wait 1 second(s) until 
the next check 2025-09-18 10:48:11.565156 | orchestrator | 2025-09-18 10:48:11 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:11.565990 | orchestrator | 2025-09-18 10:48:11 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:11.567062 | orchestrator | 2025-09-18 10:48:11 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:11.567503 | orchestrator | 2025-09-18 10:48:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:14.615011 | orchestrator | 2025-09-18 10:48:14 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:14.615726 | orchestrator | 2025-09-18 10:48:14 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:14.616664 | orchestrator | 2025-09-18 10:48:14 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:14.616689 | orchestrator | 2025-09-18 10:48:14 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:17.655694 | orchestrator | 2025-09-18 10:48:17 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:17.657040 | orchestrator | 2025-09-18 10:48:17 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:17.657794 | orchestrator | 2025-09-18 10:48:17 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:17.657896 | orchestrator | 2025-09-18 10:48:17 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:20.703051 | orchestrator | 2025-09-18 10:48:20 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:20.703527 | orchestrator | 2025-09-18 10:48:20 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:20.704642 | orchestrator | 2025-09-18 10:48:20 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 
10:48:20.704667 | orchestrator | 2025-09-18 10:48:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:23.758138 | orchestrator | 2025-09-18 10:48:23 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:23.759796 | orchestrator | 2025-09-18 10:48:23 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:23.764615 | orchestrator | 2025-09-18 10:48:23 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:23.764673 | orchestrator | 2025-09-18 10:48:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:26.816001 | orchestrator | 2025-09-18 10:48:26 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:26.817054 | orchestrator | 2025-09-18 10:48:26 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:26.817718 | orchestrator | 2025-09-18 10:48:26 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:26.817924 | orchestrator | 2025-09-18 10:48:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:29.863138 | orchestrator | 2025-09-18 10:48:29 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:29.863224 | orchestrator | 2025-09-18 10:48:29 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:29.863864 | orchestrator | 2025-09-18 10:48:29 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:29.864112 | orchestrator | 2025-09-18 10:48:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:32.908509 | orchestrator | 2025-09-18 10:48:32 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:32.909485 | orchestrator | 2025-09-18 10:48:32 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:32.910685 | orchestrator | 2025-09-18 10:48:32 | 
INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:32.910975 | orchestrator | 2025-09-18 10:48:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:35.964449 | orchestrator | 2025-09-18 10:48:35 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:35.968081 | orchestrator | 2025-09-18 10:48:35 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:35.970223 | orchestrator | 2025-09-18 10:48:35 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:35.970510 | orchestrator | 2025-09-18 10:48:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:39.043978 | orchestrator | 2025-09-18 10:48:39 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:39.044058 | orchestrator | 2025-09-18 10:48:39 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:39.044072 | orchestrator | 2025-09-18 10:48:39 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:39.044084 | orchestrator | 2025-09-18 10:48:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:42.101325 | orchestrator | 2025-09-18 10:48:42 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:42.101783 | orchestrator | 2025-09-18 10:48:42 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:42.103068 | orchestrator | 2025-09-18 10:48:42 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:42.103162 | orchestrator | 2025-09-18 10:48:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:45.153611 | orchestrator | 2025-09-18 10:48:45 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:45.156051 | orchestrator | 2025-09-18 10:48:45 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in 
state STARTED 2025-09-18 10:48:45.158658 | orchestrator | 2025-09-18 10:48:45 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:45.158687 | orchestrator | 2025-09-18 10:48:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:48.213603 | orchestrator | 2025-09-18 10:48:48 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:48.213712 | orchestrator | 2025-09-18 10:48:48 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:48.214874 | orchestrator | 2025-09-18 10:48:48 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:48.214899 | orchestrator | 2025-09-18 10:48:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:51.266483 | orchestrator | 2025-09-18 10:48:51 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:51.266636 | orchestrator | 2025-09-18 10:48:51 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:51.268241 | orchestrator | 2025-09-18 10:48:51 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:51.268266 | orchestrator | 2025-09-18 10:48:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:54.318171 | orchestrator | 2025-09-18 10:48:54 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:54.320396 | orchestrator | 2025-09-18 10:48:54 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:54.322218 | orchestrator | 2025-09-18 10:48:54 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:54.322470 | orchestrator | 2025-09-18 10:48:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:48:57.365659 | orchestrator | 2025-09-18 10:48:57 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state STARTED 2025-09-18 10:48:57.365746 | orchestrator 
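The long run of `Task … is in state STARTED` / `Wait 1 second(s) until the next check` messages above is a simple fixed-interval polling loop over a set of task IDs. A hedged sketch of that pattern (the function name, `get_state` callback, and timeout handling are assumptions for illustration, not the actual osism client code):

```python
# Illustrative sketch of the polling pattern visible in the log: query each
# pending task's state, report it, and sleep before the next round.
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=300.0):
    """Poll until every task leaves the STARTED state or the timeout expires."""
    deadline = time.monotonic() + timeout
    pending = set(task_ids)
    while pending and time.monotonic() < deadline:
        for task_id in sorted(pending):  # sorted() copies, so discard() is safe
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state != "STARTED":
                pending.discard(task_id)
        if pending:
            print(f"Wait {int(interval)} second(s) until the next check")
            time.sleep(interval)
    return not pending  # True only if every task finished in time
```

The log shows three concurrent task IDs being polled this way until the first one (`884e6d54-…`) reaches `SUCCESS` and its buffered Ansible output is flushed.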
| 2025-09-18 10:48:57 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED 2025-09-18 10:48:57.366758 | orchestrator | 2025-09-18 10:48:57 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED 2025-09-18 10:48:57.366784 | orchestrator | 2025-09-18 10:48:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:49:00.432944 | orchestrator | 2025-09-18 10:49:00 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED 2025-09-18 10:49:00.438517 | orchestrator | 2025-09-18 10:49:00 | INFO  | Task 884e6d54-294b-4c3e-88c1-85ae99056f61 is in state SUCCESS 2025-09-18 10:49:00.440082 | orchestrator | 2025-09-18 10:49:00.440113 | orchestrator | 2025-09-18 10:49:00.440125 | orchestrator | PLAY [Create ceph pools] ******************************************************* 2025-09-18 10:49:00.440137 | orchestrator | 2025-09-18 10:49:00.440148 | orchestrator | TASK [ceph-facts : Include facts.yml] ****************************************** 2025-09-18 10:49:00.440161 | orchestrator | Thursday 18 September 2025 10:46:45 +0000 (0:00:00.705) 0:00:00.705 **** 2025-09-18 10:49:00.440172 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:49:00.440185 | orchestrator | 2025-09-18 10:49:00.440196 | orchestrator | TASK [ceph-facts : Check if it is atomic host] ********************************* 2025-09-18 10:49:00.440207 | orchestrator | Thursday 18 September 2025 10:46:46 +0000 (0:00:00.653) 0:00:01.358 **** 2025-09-18 10:49:00.440219 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.440231 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.440242 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.440279 | orchestrator | 2025-09-18 10:49:00.440291 | orchestrator | TASK [ceph-facts : Set_fact is_atomic] ***************************************** 2025-09-18 10:49:00.440302 | orchestrator | Thursday 18 September 2025 10:46:46 +0000 
(0:00:00.656) 0:00:02.014 ****
2025-09-18 10:49:00.440313 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.440324 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.440335 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.440345 | orchestrator |
2025-09-18 10:49:00.440419 | orchestrator | TASK [ceph-facts : Check if podman binary is present] **************************
2025-09-18 10:49:00.440434 | orchestrator | Thursday 18 September 2025 10:46:47 +0000 (0:00:00.289) 0:00:02.304 ****
2025-09-18 10:49:00.440445 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.440456 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.440521 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.440847 | orchestrator |
2025-09-18 10:49:00.440861 | orchestrator | TASK [ceph-facts : Set_fact container_binary] **********************************
2025-09-18 10:49:00.440872 | orchestrator | Thursday 18 September 2025 10:46:47 +0000 (0:00:00.831) 0:00:03.136 ****
2025-09-18 10:49:00.440884 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.440894 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.440905 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.440916 | orchestrator |
2025-09-18 10:49:00.440927 | orchestrator | TASK [ceph-facts : Set_fact ceph_cmd] ******************************************
2025-09-18 10:49:00.440938 | orchestrator | Thursday 18 September 2025 10:46:48 +0000 (0:00:00.324) 0:00:03.460 ****
2025-09-18 10:49:00.440949 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.440960 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.440970 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.440981 | orchestrator |
2025-09-18 10:49:00.440992 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python] *********************
2025-09-18 10:49:00.441004 | orchestrator | Thursday 18 September 2025 10:46:48 +0000 (0:00:00.309) 0:00:03.770 ****
2025-09-18 10:49:00.441015 | orchestrator |
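The "Check if podman binary is present" and "Set_fact container_binary" tasks together pick which container runtime the rest of the play shells out to (the later `docker ps` calls show docker was chosen here). A simplified sketch of that selection, with the lookup injectable so it can be tested without touching the host:

```python
import shutil

def detect_container_binary(which=shutil.which):
    """Prefer podman when its binary is on PATH, otherwise fall back to
    docker. A simplified sketch of the ceph-facts selection, not the
    role's exact conditions; `which` defaults to shutil.which."""
    if which("podman"):
        return "podman"
    if which("docker"):
        return "docker"
    raise RuntimeError("no supported container runtime found")
```

On the Ubuntu testbed nodes in this run, this kind of check resolved to `docker`, which is why the monitor lookups below invoke `docker ps`.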
ok: [testbed-node-3]
2025-09-18 10:49:00.441040 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.441052 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.441062 | orchestrator |
2025-09-18 10:49:00.441074 | orchestrator | TASK [ceph-facts : Set_fact discovered_interpreter_python if not previously set] ***
2025-09-18 10:49:00.441084 | orchestrator | Thursday 18 September 2025 10:46:48 +0000 (0:00:00.306) 0:00:04.076 ****
2025-09-18 10:49:00.441096 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:49:00.441108 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:49:00.441128 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:49:00.441139 | orchestrator |
2025-09-18 10:49:00.441150 | orchestrator | TASK [ceph-facts : Set_fact ceph_release ceph_stable_release] ******************
2025-09-18 10:49:00.441161 | orchestrator | Thursday 18 September 2025 10:46:49 +0000 (0:00:00.514) 0:00:04.591 ****
2025-09-18 10:49:00.441172 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.441183 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.441194 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.441204 | orchestrator |
2025-09-18 10:49:00.441215 | orchestrator | TASK [ceph-facts : Set_fact monitor_name ansible_facts['hostname']] ************
2025-09-18 10:49:00.441226 | orchestrator | Thursday 18 September 2025 10:46:49 +0000 (0:00:00.300) 0:00:04.891 ****
2025-09-18 10:49:00.441237 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-18 10:49:00.441248 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-18 10:49:00.441259 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-18 10:49:00.441270 | orchestrator |
2025-09-18 10:49:00.441281 | orchestrator | TASK [ceph-facts : Set_fact container_exec_cmd] ********************************
2025-09-18 10:49:00.441292 |
orchestrator | Thursday 18 September 2025 10:46:50 +0000 (0:00:00.681) 0:00:05.573 ****
2025-09-18 10:49:00.441303 | orchestrator | ok: [testbed-node-3]
2025-09-18 10:49:00.441314 | orchestrator | ok: [testbed-node-4]
2025-09-18 10:49:00.441325 | orchestrator | ok: [testbed-node-5]
2025-09-18 10:49:00.441359 | orchestrator |
2025-09-18 10:49:00.441370 | orchestrator | TASK [ceph-facts : Find a running mon container] *******************************
2025-09-18 10:49:00.441381 | orchestrator | Thursday 18 September 2025 10:46:50 +0000 (0:00:00.428) 0:00:06.002 ****
2025-09-18 10:49:00.441405 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-09-18 10:49:00.441416 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-09-18 10:49:00.441427 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-09-18 10:49:00.441438 | orchestrator |
2025-09-18 10:49:00.441449 | orchestrator | TASK [ceph-facts : Check for a ceph mon socket] ********************************
2025-09-18 10:49:00.441462 | orchestrator | Thursday 18 September 2025 10:46:52 +0000 (0:00:02.136) 0:00:08.138 ****
2025-09-18 10:49:00.441474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-09-18 10:49:00.441487 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-09-18 10:49:00.441500 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-09-18 10:49:00.441512 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:49:00.441539 | orchestrator |
2025-09-18 10:49:00.441553 | orchestrator | TASK [ceph-facts : Check if the ceph mon socket is in-use] *********************
2025-09-18 10:49:00.441576 | orchestrator | Thursday 18 September 2025 10:46:53 +0000 (0:00:00.406) 0:00:08.545 ****
2025-09-18 10:49:00.441591 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True,
'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.441608 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.441621 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.441635 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.441647 | orchestrator | 2025-09-18 10:49:00.442180 | orchestrator | TASK [ceph-facts : Set_fact running_mon - non_container] *********************** 2025-09-18 10:49:00.442203 | orchestrator | Thursday 18 September 2025 10:46:54 +0000 (0:00:00.820) 0:00:09.366 **** 2025-09-18 10:49:00.442217 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.442232 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 
10:49:00.442245 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.442256 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442268 | orchestrator | 2025-09-18 10:49:00.442280 | orchestrator | TASK [ceph-facts : Set_fact running_mon - container] *************************** 2025-09-18 10:49:00.442303 | orchestrator | Thursday 18 September 2025 10:46:54 +0000 (0:00:00.161) 0:00:09.527 **** 2025-09-18 10:49:00.442317 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '97e3a205cade', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-09-18 10:46:51.536403', 'end': '2025-09-18 10:46:51.571197', 'delta': '0:00:00.034794', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['97e3a205cade'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-09-18 10:49:00.442339 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': '8711a540ed38', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-09-18 10:46:52.294045', 'end': '2025-09-18 10:46:52.336271', 'delta': '0:00:00.042226', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter 
name=ceph-mon-testbed-node-1', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['8711a540ed38'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-09-18 10:49:00.442385 | orchestrator | ok: [testbed-node-3] => (item={'changed': False, 'stdout': 'bef85e25073a', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-09-18 10:46:52.800158', 'end': '2025-09-18 10:46:52.849381', 'delta': '0:00:00.049223', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['bef85e25073a'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-09-18 10:49:00.442399 | orchestrator | 2025-09-18 10:49:00.442411 | orchestrator | TASK [ceph-facts : Set_fact _container_exec_cmd] ******************************* 2025-09-18 10:49:00.442422 | orchestrator | Thursday 18 September 2025 10:46:54 +0000 (0:00:00.409) 0:00:09.937 **** 2025-09-18 10:49:00.442434 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.442445 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.442456 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.442468 | orchestrator | 2025-09-18 10:49:00.442558 | orchestrator | TASK [ceph-facts : Get current fsid if cluster is already running] ************* 2025-09-18 10:49:00.442570 | orchestrator | Thursday 18 September 2025 10:46:55 +0000 (0:00:00.456) 0:00:10.394 **** 2025-09-18 10:49:00.442582 | orchestrator | ok: 
[testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-09-18 10:49:00.442593 | orchestrator | 2025-09-18 10:49:00.442604 | orchestrator | TASK [ceph-facts : Set_fact current_fsid rc 1] ********************************* 2025-09-18 10:49:00.442615 | orchestrator | Thursday 18 September 2025 10:46:56 +0000 (0:00:01.686) 0:00:12.080 **** 2025-09-18 10:49:00.442626 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442637 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.442648 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.442659 | orchestrator | 2025-09-18 10:49:00.442670 | orchestrator | TASK [ceph-facts : Get current fsid] ******************************************* 2025-09-18 10:49:00.442681 | orchestrator | Thursday 18 September 2025 10:46:57 +0000 (0:00:00.312) 0:00:12.393 **** 2025-09-18 10:49:00.442691 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442712 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.442723 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.442734 | orchestrator | 2025-09-18 10:49:00.442745 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-18 10:49:00.442756 | orchestrator | Thursday 18 September 2025 10:46:57 +0000 (0:00:00.430) 0:00:12.823 **** 2025-09-18 10:49:00.442766 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442777 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.442788 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.442799 | orchestrator | 2025-09-18 10:49:00.442810 | orchestrator | TASK [ceph-facts : Set_fact fsid from current_fsid] **************************** 2025-09-18 10:49:00.442821 | orchestrator | Thursday 18 September 2025 10:46:58 +0000 (0:00:00.571) 0:00:13.394 **** 2025-09-18 10:49:00.442832 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.442843 | orchestrator | 2025-09-18 10:49:00.442854 | orchestrator | TASK 
[ceph-facts : Generate cluster fsid] ************************************** 2025-09-18 10:49:00.442865 | orchestrator | Thursday 18 September 2025 10:46:58 +0000 (0:00:00.138) 0:00:13.533 **** 2025-09-18 10:49:00.442876 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442887 | orchestrator | 2025-09-18 10:49:00.442898 | orchestrator | TASK [ceph-facts : Set_fact fsid] ********************************************** 2025-09-18 10:49:00.442909 | orchestrator | Thursday 18 September 2025 10:46:58 +0000 (0:00:00.242) 0:00:13.775 **** 2025-09-18 10:49:00.442920 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442931 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.442942 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.442953 | orchestrator | 2025-09-18 10:49:00.442964 | orchestrator | TASK [ceph-facts : Resolve device link(s)] ************************************* 2025-09-18 10:49:00.442975 | orchestrator | Thursday 18 September 2025 10:46:58 +0000 (0:00:00.289) 0:00:14.065 **** 2025-09-18 10:49:00.442985 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.442996 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443008 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443019 | orchestrator | 2025-09-18 10:49:00.443029 | orchestrator | TASK [ceph-facts : Set_fact build devices from resolved symlinks] ************** 2025-09-18 10:49:00.443040 | orchestrator | Thursday 18 September 2025 10:46:59 +0000 (0:00:00.337) 0:00:14.403 **** 2025-09-18 10:49:00.443051 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.443062 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443073 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443084 | orchestrator | 2025-09-18 10:49:00.443095 | orchestrator | TASK [ceph-facts : Resolve dedicated_device link(s)] *************************** 2025-09-18 10:49:00.443105 | orchestrator | Thursday 18 September 
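The fsid tasks above follow a reuse-before-generate pattern: the running cluster's fsid was read back from a monitor, "Set_fact fsid from current_fsid" kept it, and "Generate cluster fsid" was skipped. A simplified sketch of that decision:

```python
import uuid

def resolve_fsid(current_fsid=None):
    """Keep the fsid discovered from a running cluster when one exists;
    only generate a fresh UUID for a brand-new cluster. A simplified
    sketch of the ceph-facts behaviour, not the role's exact tasks."""
    if current_fsid:
        return current_fsid  # reuse: 'Set_fact fsid from current_fsid'
    return str(uuid.uuid4())  # new cluster: 'Generate cluster fsid'
```

Reusing the discovered fsid is what keeps re-runs of the play idempotent against an already-deployed cluster.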
2025 10:46:59 +0000 (0:00:00.522) 0:00:14.925 **** 2025-09-18 10:49:00.443122 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.443133 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443144 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443155 | orchestrator | 2025-09-18 10:49:00.443166 | orchestrator | TASK [ceph-facts : Set_fact build dedicated_devices from resolved symlinks] **** 2025-09-18 10:49:00.443179 | orchestrator | Thursday 18 September 2025 10:47:00 +0000 (0:00:00.329) 0:00:15.255 **** 2025-09-18 10:49:00.443192 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.443204 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443217 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443229 | orchestrator | 2025-09-18 10:49:00.443241 | orchestrator | TASK [ceph-facts : Resolve bluestore_wal_device link(s)] *********************** 2025-09-18 10:49:00.443254 | orchestrator | Thursday 18 September 2025 10:47:00 +0000 (0:00:00.323) 0:00:15.578 **** 2025-09-18 10:49:00.443266 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.443279 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443291 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443302 | orchestrator | 2025-09-18 10:49:00.443315 | orchestrator | TASK [ceph-facts : Set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-09-18 10:49:00.443361 | orchestrator | Thursday 18 September 2025 10:47:00 +0000 (0:00:00.364) 0:00:15.942 **** 2025-09-18 10:49:00.443387 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.443400 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.443412 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.443425 | orchestrator | 2025-09-18 10:49:00.443438 | orchestrator | TASK [ceph-facts : Collect existed devices] ************************************ 2025-09-18 10:49:00.443451 | orchestrator | Thursday 18 September 
2025 10:47:01 +0000 (0:00:00.557) 0:00:16.499 **** 2025-09-18 10:49:00.443464 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--727b3796--a5b5--597b--af2a--93b7c6d70a12-osd--block--727b3796--a5b5--597b--af2a--93b7c6d70a12', 'dm-uuid-LVM-Vaw5CJk2C3mO0tSxBixUJ0g2po36vShOlLaLHQgIe5no13lbKLZquyFXaJIAjng0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443480 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f-osd--block--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f', 'dm-uuid-LVM-hzyUORPsuwHDklNrX83rcpbROwYLAjcCmjBB4YT4C0vy2i12R6hzhkFvyVNjl7Ie'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443493 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': 
[]}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443568 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443637 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443662 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part1', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part14', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part15', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part16', 'scsi-SQEMU_QEMU_HARDDISK_742a5747-f873-4808-a190-7917a84c4500-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:49:00.443698 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--727b3796--a5b5--597b--af2a--93b7c6d70a12-osd--block--727b3796--a5b5--597b--af2a--93b7c6d70a12'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-M1FVlv-9D5q-lxd6-Riu1-KasI-Hcdt-5OI0oS', 'scsi-0QEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2', 'scsi-SQEMU_QEMU_HARDDISK_649a7a14-18b6-4e11-8675-ab8fe85002f2'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:49:00.443748 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7-osd--block--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7', 'dm-uuid-LVM-yC7WLVYhTkV75h34D1fzIvnr47MYnyLcfJJ6y9smbo0iQM2OTQm1fH2b0thbo0ZN'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443762 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f-osd--block--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-4X8uJ2-YZWG-qtzE-dAFO-At1b-uENL-AqM0bt', 'scsi-0QEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040', 'scsi-SQEMU_QEMU_HARDDISK_a69d22c4-e927-4699-a327-d057749b4040'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:49:00.443774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--7a586834--03f6--5ee9--b58c--2d4644436c0e-osd--block--7a586834--03f6--5ee9--b58c--2d4644436c0e', 'dm-uuid-LVM-eWaIkBgFirJ2OipAcVXLq4k3mPWELeYtbQiUTn3TezFU30xUcRw7G8STGryTifyp'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443785 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2', 'scsi-SQEMU_QEMU_HARDDISK_e49cb3c6-bfd0-4159-abb8-b26259c9fbe2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:49:00.443797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-09-18 10:49:00.443809 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-13-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-09-18 10:49:00.443825 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}) 
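The "Collect existed devices" loop iterates over the `ansible_facts['devices']` dumps shown above, where the interesting signal per disk is whether it is already claimed: `sda` carries the OS partitions, `sdb`/`sdc` have LVM holders from existing OSDs, and only `sdd` is an untouched 20 GB disk. A simplified sketch of that kind of eligibility filter (not the role's exact conditions):

```python
def free_block_devices(devices):
    """Filter an ansible_facts['devices'] mapping down to disks that are
    still unused: no partitions, no holders, and not a loop/cdrom/dm
    device. A simplified sketch of the checks applied per item in the
    'Collect existed devices' loop."""
    return sorted(
        name
        for name, info in devices.items()
        if not info.get("partitions")      # e.g. sda is the OS disk
        and not info.get("holders")        # e.g. sdb/sdc back existing OSDs
        and not name.startswith(("loop", "sr", "dm-"))
    )
```

Applied to the node-3 facts above, such a filter would leave only `sdd`.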
2025-09-18 10:49:00.443877 | orchestrator | skipping: [testbed-node-4] => (items loop2 through loop7, sda, sdb, sdc, sdd, sr0: same conditional skip; per-device facts omitted)
2025-09-18 10:49:00.444026 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:49:00.444062 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:49:00.444083 | orchestrator | skipping: [testbed-node-5] => (items dm-0, dm-1, loop0 through loop7, sda, sdb, sdc, sdd, sr0: same conditional skip; per-device facts omitted)
2025-09-18 10:49:00.444303 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:49:00.444315 | orchestrator |
2025-09-18 10:49:00.444326 | orchestrator | TASK [ceph-facts : Set_fact devices 
generate device list when osd_auto_discovery] *** 2025-09-18 10:49:00.444337 | orchestrator | Thursday 18 September 2025 10:47:01 +0000 (0:00:00.643) 0:00:17.143 **** 2025-09-18 10:49:00.444348 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--727b3796--a5b5--597b--af2a--93b7c6d70a12-osd--block--727b3796--a5b5--597b--af2a--93b7c6d70a12', 'dm-uuid-LVM-Vaw5CJk2C3mO0tSxBixUJ0g2po36vShOlLaLHQgIe5no13lbKLZquyFXaJIAjng0'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.444360 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f-osd--block--9692bdf8--7fc8--59c1--a3ba--06351cf9fe0f', 'dm-uuid-LVM-hzyUORPsuwHDklNrX83rcpbROwYLAjcCmjBB4YT4C0vy2i12R6hzhkFvyVNjl7Ie'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.444372 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})
2025-09-18 10:49:00.444490 | orchestrator | skipping: [testbed-node-3] => (items loop1 through loop7, sda, sdb, sdc, sdd, sr0: conditional osd_auto_discovery was false; per-device facts omitted)
2025-09-18 10:49:00.444584 | orchestrator | skipping: [testbed-node-4] => (items dm-0, dm-1, loop0 through loop7: conditional osd_auto_discovery was false; per-device facts omitted)
2025-09-18 10:49:00.444878 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:49:00.444936 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--47a403a8--a225--5ee6--9198--c4852ee3470e-osd--block--47a403a8--a225--5ee6--9198--c4852ee3470e', 'dm-uuid-LVM-0fKX4cfymsz5amMcqzBfiCgoZkhUeauNxj0GsySfSMS8VgLgqJt1MG0b7sDje6Kx'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.444958 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part1', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part14', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part15', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part16', 'scsi-SQEMU_QEMU_HARDDISK_d92083cd-8111-41a2-a6b5-4afbc391d177-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-18 10:49:00.444977 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a661e8c0--0419--5fc2--afc1--c6737c299168-osd--block--a661e8c0--0419--5fc2--afc1--c6737c299168', 'dm-uuid-LVM-tu4MSQ7U1BANsHFB4tWHe0vFmhIQTVwpi62cEXVQfQHrQvMXvt2TqyheRXw2ewup'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.444994 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7-osd--block--f9a1ff5a--5f5e--51c3--b436--b4c70a0fd2b7'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-dcfXwQ-o4x6-2xYP-2XW1-PBCd-GWqM-v290he', 'scsi-0QEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70', 'scsi-SQEMU_QEMU_HARDDISK_32515b61-c47f-4019-8995-ef0e516a1d70'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445006 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445025 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--7a586834--03f6--5ee9--b58c--2d4644436c0e-osd--block--7a586834--03f6--5ee9--b58c--2d4644436c0e'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mPVnVN-zO8W-fS7j-zylO-aYxp-imbs-kPqbMC', 'scsi-0QEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940', 'scsi-SQEMU_QEMU_HARDDISK_f3f02157-3479-476e-b2a3-c621f2183940'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445037 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445055 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4', 'scsi-SQEMU_QEMU_HARDDISK_00278712-8848-43cc-b367-9df7adc0d1b4'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445067 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445083 | orchestrator | skipping: [testbed-node-4] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-14-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445101 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445113 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.445124 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445136 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445153 | orchestrator | skipping: 
[testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445165 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445190 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part1', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['b852d8d2-8460-44aa-8998-23e4f04d73cf']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': 'b852d8d2-8460-44aa-8998-23e4f04d73cf'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part14', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part15', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['5C78-612A']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': '5C78-612A'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part16', 'scsi-SQEMU_QEMU_HARDDISK_d793f249-b859-4211-aee9-7d27fd7330c6-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['09d53dc1-1e03-4286-bbb8-2b1796cf92ec']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '09d53dc1-1e03-4286-bbb8-2b1796cf92ec'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  
2025-09-18 10:49:00.445203 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdb', 'value': {'holders': ['ceph--47a403a8--a225--5ee6--9198--c4852ee3470e-osd--block--47a403a8--a225--5ee6--9198--c4852ee3470e'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-mfSKw9-qlCs-k70n-J14h-tqMO-YAjr-iRt747', 'scsi-0QEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52', 'scsi-SQEMU_QEMU_HARDDISK_9c9fa6f7-5631-4b7c-8490-02f085d70a52'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445223 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdc', 'value': {'holders': ['ceph--a661e8c0--0419--5fc2--afc1--c6737c299168-osd--block--a661e8c0--0419--5fc2--afc1--c6737c299168'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-TTS66v-0VlV-Zuar-0Fk8-NhPj-NrRf-nhrckx', 'scsi-0QEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836', 'scsi-SQEMU_QEMU_HARDDISK_56fd191f-3e0c-491f-8cd9-aabd31cc0836'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445239 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64', 'scsi-SQEMU_QEMU_HARDDISK_a9e5fe38-9aa1-47d1-b292-dbaa7924ce64'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445258 | orchestrator | skipping: [testbed-node-5] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'osd_auto_discovery | default(False) | bool', 'item': {'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-09-18-09-55-16-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 
'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}}, 'ansible_loop_var': 'item'})  2025-09-18 10:49:00.445270 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.445281 | orchestrator | 2025-09-18 10:49:00.445292 | orchestrator | TASK [ceph-facts : Check if the ceph conf exists] ****************************** 2025-09-18 10:49:00.445303 | orchestrator | Thursday 18 September 2025 10:47:02 +0000 (0:00:00.601) 0:00:17.745 **** 2025-09-18 10:49:00.445314 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.445325 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.445336 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.445347 | orchestrator | 2025-09-18 10:49:00.445358 | orchestrator | TASK [ceph-facts : Set default osd_pool_default_crush_rule fact] *************** 2025-09-18 10:49:00.445369 | orchestrator | Thursday 18 September 2025 10:47:03 +0000 (0:00:00.682) 0:00:18.428 **** 2025-09-18 10:49:00.445380 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.445398 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.445409 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.445419 | orchestrator | 2025-09-18 10:49:00.445430 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 10:49:00.445441 | orchestrator | Thursday 18 September 2025 10:47:03 +0000 (0:00:00.494) 0:00:18.922 **** 2025-09-18 10:49:00.445452 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.445463 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.445474 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.445484 | orchestrator | 2025-09-18 10:49:00.445495 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 10:49:00.445506 | orchestrator | Thursday 18 September 2025 10:47:05 +0000 (0:00:01.596) 
0:00:20.519 **** 2025-09-18 10:49:00.445517 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.445546 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.445558 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.445568 | orchestrator | 2025-09-18 10:49:00.445579 | orchestrator | TASK [ceph-facts : Read osd pool default crush rule] *************************** 2025-09-18 10:49:00.445590 | orchestrator | Thursday 18 September 2025 10:47:05 +0000 (0:00:00.334) 0:00:20.853 **** 2025-09-18 10:49:00.445601 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.445612 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.445623 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.445634 | orchestrator | 2025-09-18 10:49:00.445645 | orchestrator | TASK [ceph-facts : Set osd_pool_default_crush_rule fact] *********************** 2025-09-18 10:49:00.445656 | orchestrator | Thursday 18 September 2025 10:47:06 +0000 (0:00:00.421) 0:00:21.275 **** 2025-09-18 10:49:00.445666 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.445677 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.445688 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.445699 | orchestrator | 2025-09-18 10:49:00.445710 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv4] ************************* 2025-09-18 10:49:00.445721 | orchestrator | Thursday 18 September 2025 10:47:06 +0000 (0:00:00.544) 0:00:21.819 **** 2025-09-18 10:49:00.445732 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-09-18 10:49:00.445743 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-09-18 10:49:00.445754 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-09-18 10:49:00.445765 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-09-18 10:49:00.445776 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-09-18 10:49:00.445787 | 
orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-09-18 10:49:00.445798 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-09-18 10:49:00.445809 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-09-18 10:49:00.445819 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-09-18 10:49:00.445830 | orchestrator | 2025-09-18 10:49:00.445841 | orchestrator | TASK [ceph-facts : Set_fact _monitor_addresses - ipv6] ************************* 2025-09-18 10:49:00.445852 | orchestrator | Thursday 18 September 2025 10:47:07 +0000 (0:00:00.841) 0:00:22.661 **** 2025-09-18 10:49:00.445863 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-09-18 10:49:00.445874 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-09-18 10:49:00.445885 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-09-18 10:49:00.445896 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.445907 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-09-18 10:49:00.445917 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-09-18 10:49:00.445928 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-09-18 10:49:00.445938 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.445949 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-09-18 10:49:00.445964 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-09-18 10:49:00.445983 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-09-18 10:49:00.445994 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.446005 | orchestrator | 2025-09-18 10:49:00.446043 | orchestrator | TASK [ceph-facts : Import_tasks set_radosgw_address.yml] *********************** 2025-09-18 10:49:00.446057 | orchestrator | Thursday 18 September 2025 10:47:07 +0000 (0:00:00.364) 0:00:23.026 **** 2025-09-18 
10:49:00.446069 | orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:49:00.446080 | orchestrator | 2025-09-18 10:49:00.446091 | orchestrator | TASK [ceph-facts : Set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-09-18 10:49:00.446103 | orchestrator | Thursday 18 September 2025 10:47:08 +0000 (0:00:00.725) 0:00:23.751 **** 2025-09-18 10:49:00.446114 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446126 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.446137 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.446148 | orchestrator | 2025-09-18 10:49:00.446166 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-09-18 10:49:00.446178 | orchestrator | Thursday 18 September 2025 10:47:08 +0000 (0:00:00.317) 0:00:24.069 **** 2025-09-18 10:49:00.446189 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446200 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.446211 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.446222 | orchestrator | 2025-09-18 10:49:00.446232 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-09-18 10:49:00.446244 | orchestrator | Thursday 18 September 2025 10:47:09 +0000 (0:00:00.315) 0:00:24.385 **** 2025-09-18 10:49:00.446254 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446265 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.446276 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:49:00.446287 | orchestrator | 2025-09-18 10:49:00.446298 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_address] *************** 2025-09-18 10:49:00.446309 | orchestrator | Thursday 18 September 2025 10:47:09 +0000 (0:00:00.328) 0:00:24.714 **** 2025-09-18 
10:49:00.446320 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.446331 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.446342 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.446353 | orchestrator | 2025-09-18 10:49:00.446364 | orchestrator | TASK [ceph-facts : Set_fact _interface] **************************************** 2025-09-18 10:49:00.446375 | orchestrator | Thursday 18 September 2025 10:47:10 +0000 (0:00:00.646) 0:00:25.360 **** 2025-09-18 10:49:00.446386 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 10:49:00.446397 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 10:49:00.446408 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 10:49:00.446419 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446430 | orchestrator | 2025-09-18 10:49:00.446441 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-09-18 10:49:00.446452 | orchestrator | Thursday 18 September 2025 10:47:10 +0000 (0:00:00.409) 0:00:25.770 **** 2025-09-18 10:49:00.446463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 10:49:00.446474 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 10:49:00.446485 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 10:49:00.446496 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446506 | orchestrator | 2025-09-18 10:49:00.446518 | orchestrator | TASK [ceph-facts : Set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-09-18 10:49:00.446542 | orchestrator | Thursday 18 September 2025 10:47:10 +0000 (0:00:00.365) 0:00:26.135 **** 2025-09-18 10:49:00.446553 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-09-18 10:49:00.446565 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-09-18 10:49:00.446576 | 
orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-09-18 10:49:00.446597 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.446608 | orchestrator | 2025-09-18 10:49:00.446619 | orchestrator | TASK [ceph-facts : Reset rgw_instances (workaround)] *************************** 2025-09-18 10:49:00.446630 | orchestrator | Thursday 18 September 2025 10:47:11 +0000 (0:00:00.377) 0:00:26.512 **** 2025-09-18 10:49:00.446641 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:49:00.446652 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:49:00.446663 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:49:00.446675 | orchestrator | 2025-09-18 10:49:00.446686 | orchestrator | TASK [ceph-facts : Set_fact rgw_instances] ************************************* 2025-09-18 10:49:00.446697 | orchestrator | Thursday 18 September 2025 10:47:11 +0000 (0:00:00.338) 0:00:26.851 **** 2025-09-18 10:49:00.446708 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-09-18 10:49:00.446719 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-09-18 10:49:00.446731 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-09-18 10:49:00.446742 | orchestrator | 2025-09-18 10:49:00.446753 | orchestrator | TASK [ceph-facts : Set_fact ceph_run_cmd] ************************************** 2025-09-18 10:49:00.446764 | orchestrator | Thursday 18 September 2025 10:47:12 +0000 (0:00:00.546) 0:00:27.398 **** 2025-09-18 10:49:00.446775 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 10:49:00.446786 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 10:49:00.446797 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 10:49:00.446808 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 10:49:00.446819 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => 
(item=testbed-node-4) 2025-09-18 10:49:00.446831 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 10:49:00.446842 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-18 10:49:00.446853 | orchestrator | 2025-09-18 10:49:00.446869 | orchestrator | TASK [ceph-facts : Set_fact ceph_admin_command] ******************************** 2025-09-18 10:49:00.446880 | orchestrator | Thursday 18 September 2025 10:47:13 +0000 (0:00:00.957) 0:00:28.355 **** 2025-09-18 10:49:00.446891 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-09-18 10:49:00.446902 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-09-18 10:49:00.446913 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-09-18 10:49:00.446924 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-09-18 10:49:00.446935 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-09-18 10:49:00.446947 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-09-18 10:49:00.446958 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-09-18 10:49:00.446969 | orchestrator | 2025-09-18 10:49:00.446985 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************ 2025-09-18 10:49:00.446997 | orchestrator | Thursday 18 September 2025 10:47:14 +0000 (0:00:01.701) 0:00:30.057 **** 2025-09-18 10:49:00.447008 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:49:00.447019 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:49:00.447030 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5 2025-09-18 10:49:00.447042 | orchestrator | 2025-09-18 10:49:00.447053 | 
orchestrator | TASK [create openstack pool(s)] ************************************************ 2025-09-18 10:49:00.447064 | orchestrator | Thursday 18 September 2025 10:47:15 +0000 (0:00:00.387) 0:00:30.444 **** 2025-09-18 10:49:00.447075 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:49:00.447093 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:49:00.447105 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:49:00.447116 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:49:00.447128 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-09-18 10:49:00.447139 | orchestrator | 2025-09-18 10:49:00.447150 | orchestrator | TASK [generate keys] 
*********************************************************** 2025-09-18 10:49:00.447162 | orchestrator | Thursday 18 September 2025 10:48:02 +0000 (0:00:47.262) 0:01:17.706 **** 2025-09-18 10:49:00.447172 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447183 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447194 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447205 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447215 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447226 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447237 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}] 2025-09-18 10:49:00.447248 | orchestrator | 2025-09-18 10:49:00.447258 | orchestrator | TASK [get keys from monitors] ************************************************** 2025-09-18 10:49:00.447269 | orchestrator | Thursday 18 September 2025 10:48:27 +0000 (0:00:24.705) 0:01:42.411 **** 2025-09-18 10:49:00.447280 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447291 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447301 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447312 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447323 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447333 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447349 | orchestrator | 
ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}] 2025-09-18 10:49:00.447360 | orchestrator | 2025-09-18 10:49:00.447371 | orchestrator | TASK [copy ceph key(s) if needed] ********************************************** 2025-09-18 10:49:00.447381 | orchestrator | Thursday 18 September 2025 10:48:39 +0000 (0:00:12.285) 0:01:54.696 **** 2025-09-18 10:49:00.447392 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447403 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:49:00.447414 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:49:00.447425 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447442 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:49:00.447453 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:49:00.447471 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447482 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:49:00.447493 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:49:00.447504 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447515 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 2025-09-18 10:49:00.447568 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None) 2025-09-18 10:49:00.447580 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-09-18 10:49:00.447591 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None) 
2025-09-18 10:49:00.447602 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-18 10:49:00.447613 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-09-18 10:49:00.447624 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-09-18 10:49:00.447635 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-09-18 10:49:00.447646 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-09-18 10:49:00.447657 | orchestrator |
2025-09-18 10:49:00.447668 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:49:00.447679 | orchestrator | testbed-node-3 : ok=25  changed=0 unreachable=0 failed=0 skipped=28  rescued=0 ignored=0
2025-09-18 10:49:00.447691 | orchestrator | testbed-node-4 : ok=18  changed=0 unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-09-18 10:49:00.447702 | orchestrator | testbed-node-5 : ok=23  changed=3  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0
2025-09-18 10:49:00.447713 | orchestrator |
2025-09-18 10:49:00.447723 | orchestrator |
2025-09-18 10:49:00.447734 | orchestrator |
2025-09-18 10:49:00.447745 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:49:00.447756 | orchestrator | Thursday 18 September 2025 10:48:58 +0000 (0:00:18.719) 0:02:13.416 ****
2025-09-18 10:49:00.447766 | orchestrator | ===============================================================================
2025-09-18 10:49:00.447775 | orchestrator | create openstack pool(s) ----------------------------------------------- 47.26s
2025-09-18 10:49:00.447785 | orchestrator | generate keys ---------------------------------------------------------- 24.71s
2025-09-18 10:49:00.447795 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.72s
2025-09-18 10:49:00.447804 | orchestrator | get keys from monitors ------------------------------------------------- 12.29s
2025-09-18 10:49:00.447814 | orchestrator | ceph-facts : Find a running mon container ------------------------------- 2.14s
2025-09-18 10:49:00.447823 | orchestrator | ceph-facts : Set_fact ceph_admin_command -------------------------------- 1.70s
2025-09-18 10:49:00.447833 | orchestrator | ceph-facts : Get current fsid if cluster is already running ------------- 1.69s
2025-09-18 10:49:00.447843 | orchestrator | ceph-facts : Read osd pool default crush rule --------------------------- 1.60s
2025-09-18 10:49:00.447852 | orchestrator | ceph-facts : Set_fact ceph_run_cmd -------------------------------------- 0.96s
2025-09-18 10:49:00.447862 | orchestrator | ceph-facts : Set_fact _monitor_addresses - ipv4 ------------------------- 0.84s
2025-09-18 10:49:00.447872 | orchestrator | ceph-facts : Check if podman binary is present -------------------------- 0.83s
2025-09-18 10:49:00.447881 | orchestrator | ceph-facts : Check if the ceph mon socket is in-use --------------------- 0.82s
2025-09-18 10:49:00.447898 | orchestrator | ceph-facts : Import_tasks set_radosgw_address.yml ----------------------- 0.73s
2025-09-18 10:49:00.447907 | orchestrator | ceph-facts : Check if the ceph conf exists ------------------------------ 0.68s
2025-09-18 10:49:00.447917 | orchestrator | ceph-facts : Set_fact monitor_name ansible_facts['hostname'] ------------ 0.68s
2025-09-18 10:49:00.447927 | orchestrator | ceph-facts : Check if it is atomic host --------------------------------- 0.66s
2025-09-18 10:49:00.447936 | orchestrator | ceph-facts : Include facts.yml ------------------------------------------ 0.65s
2025-09-18 10:49:00.447946 | orchestrator | ceph-facts : Set_fact _radosgw_address to radosgw_address --------------- 0.65s
2025-09-18 10:49:00.447956 | orchestrator | ceph-facts : Collect existed devices ------------------------------------ 0.64s
2025-09-18 10:49:00.447971 | orchestrator | ceph-facts : Set_fact devices generate device list when osd_auto_discovery --- 0.60s
2025-09-18 10:49:00.447981 | orchestrator | 2025-09-18 10:49:00 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:00.447991 | orchestrator | 2025-09-18 10:49:00 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:00.448001 | orchestrator | 2025-09-18 10:49:00 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:03.489753 | orchestrator | 2025-09-18 10:49:03 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:03.490888 | orchestrator | 2025-09-18 10:49:03 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:03.492275 | orchestrator | 2025-09-18 10:49:03 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:03.492641 | orchestrator | 2025-09-18 10:49:03 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:06.529733 | orchestrator | 2025-09-18 10:49:06 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:06.533425 | orchestrator | 2025-09-18 10:49:06 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:06.535616 | orchestrator | 2025-09-18 10:49:06 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:06.535642 | orchestrator | 2025-09-18 10:49:06 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:09.590462 | orchestrator | 2025-09-18 10:49:09 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:09.591714 | orchestrator | 2025-09-18 10:49:09 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:09.593796 | orchestrator | 2025-09-18 10:49:09 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:09.594179 | orchestrator |
2025-09-18 10:49:09 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:12.653463 | orchestrator | 2025-09-18 10:49:12 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:12.656486 | orchestrator | 2025-09-18 10:49:12 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:12.659606 | orchestrator | 2025-09-18 10:49:12 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:12.659648 | orchestrator | 2025-09-18 10:49:12 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:15.715863 | orchestrator | 2025-09-18 10:49:15 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:15.717244 | orchestrator | 2025-09-18 10:49:15 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:15.719290 | orchestrator | 2025-09-18 10:49:15 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:15.719314 | orchestrator | 2025-09-18 10:49:15 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:18.769587 | orchestrator | 2025-09-18 10:49:18 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:18.771598 | orchestrator | 2025-09-18 10:49:18 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:18.774178 | orchestrator | 2025-09-18 10:49:18 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:18.774574 | orchestrator | 2025-09-18 10:49:18 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:21.815464 | orchestrator | 2025-09-18 10:49:21 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:21.815610 | orchestrator | 2025-09-18 10:49:21 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:21.815626 | orchestrator | 2025-09-18 10:49:21 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:21.815637 | orchestrator | 2025-09-18 10:49:21 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:24.862109 | orchestrator | 2025-09-18 10:49:24 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state STARTED
2025-09-18 10:49:24.862694 | orchestrator | 2025-09-18 10:49:24 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:24.863632 | orchestrator | 2025-09-18 10:49:24 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:24.863707 | orchestrator | 2025-09-18 10:49:24 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:27.904963 | orchestrator | 2025-09-18 10:49:27 | INFO  | Task f8c3d6c7-793b-4ca8-ba2a-f26d0df67d2e is in state SUCCESS
2025-09-18 10:49:27.907363 | orchestrator | 2025-09-18 10:49:27 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:27.910691 | orchestrator | 2025-09-18 10:49:27 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:27.910735 | orchestrator | 2025-09-18 10:49:27 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:30.964677 | orchestrator | 2025-09-18 10:49:30 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:30.965281 | orchestrator | 2025-09-18 10:49:30 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:30.966661 | orchestrator | 2025-09-18 10:49:30 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:30.967093 | orchestrator | 2025-09-18 10:49:30 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:33.998177 | orchestrator | 2025-09-18 10:49:33 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state STARTED
2025-09-18 10:49:33.999283 | orchestrator | 2025-09-18 10:49:34 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:34.001350 | orchestrator | 2025-09-18 10:49:34 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:34.001428 | orchestrator | 2025-09-18 10:49:34 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:37.049453 | orchestrator | 2025-09-18 10:49:37 | INFO  | Task 7060df50-4c7b-4105-9863-3509b04fc415 is in state SUCCESS
2025-09-18 10:49:37.051182 | orchestrator |
2025-09-18 10:49:37.051222 | orchestrator |
2025-09-18 10:49:37.051235 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-09-18 10:49:37.051247 | orchestrator |
2025-09-18 10:49:37.051259 | orchestrator | TASK [Fetch all ceph keys] *****************************************************
2025-09-18 10:49:37.051270 | orchestrator | Thursday 18 September 2025 10:49:02 +0000 (0:00:00.118) 0:00:00.118 ****
2025-09-18 10:49:37.051303 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.admin.keyring)
2025-09-18 10:49:37.051316 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051327 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051339 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 10:49:37.051350 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051361 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.nova.keyring)
2025-09-18 10:49:37.051372 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.glance.keyring)
2025-09-18 10:49:37.051382 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.gnocchi.keyring)
2025-09-18 10:49:37.051393 | orchestrator | ok: [testbed-manager -> testbed-node-0(192.168.16.10)] => (item=ceph.client.manila.keyring)
2025-09-18 10:49:37.051404 | orchestrator |
2025-09-18 10:49:37.051415 | orchestrator | TASK [Create share directory] **************************************************
2025-09-18 10:49:37.051427 | orchestrator | Thursday 18 September 2025 10:49:06 +0000 (0:00:04.278) 0:00:04.396 ****
2025-09-18 10:49:37.051438 | orchestrator | changed: [testbed-manager -> localhost]
2025-09-18 10:49:37.051450 | orchestrator |
2025-09-18 10:49:37.051461 | orchestrator | TASK [Write ceph keys to the share directory] **********************************
2025-09-18 10:49:37.051472 | orchestrator | Thursday 18 September 2025 10:49:07 +0000 (0:00:00.937) 0:00:05.334 ****
2025-09-18 10:49:37.051483 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-09-18 10:49:37.051522 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051534 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051545 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 10:49:37.051555 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051566 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-09-18 10:49:37.051577 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-09-18 10:49:37.051588 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-09-18 10:49:37.051599 | orchestrator | changed: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-09-18 10:49:37.051609 | orchestrator |
2025-09-18 10:49:37.051621 | orchestrator | TASK [Write ceph keys to the configuration
directory] **************************
2025-09-18 10:49:37.051631 | orchestrator | Thursday 18 September 2025 10:49:20 +0000 (0:00:12.945) 0:00:18.279 ****
2025-09-18 10:49:37.051644 | orchestrator | changed: [testbed-manager] => (item=ceph.client.admin.keyring)
2025-09-18 10:49:37.051970 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.051990 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.052002 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder-backup.keyring)
2025-09-18 10:49:37.052013 | orchestrator | changed: [testbed-manager] => (item=ceph.client.cinder.keyring)
2025-09-18 10:49:37.052024 | orchestrator | changed: [testbed-manager] => (item=ceph.client.nova.keyring)
2025-09-18 10:49:37.052035 | orchestrator | changed: [testbed-manager] => (item=ceph.client.glance.keyring)
2025-09-18 10:49:37.052046 | orchestrator | changed: [testbed-manager] => (item=ceph.client.gnocchi.keyring)
2025-09-18 10:49:37.052057 | orchestrator | changed: [testbed-manager] => (item=ceph.client.manila.keyring)
2025-09-18 10:49:37.052068 | orchestrator |
2025-09-18 10:49:37.052079 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:49:37.052102 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:49:37.052114 | orchestrator |
2025-09-18 10:49:37.052125 | orchestrator |
2025-09-18 10:49:37.052136 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:49:37.052146 | orchestrator | Thursday 18 September 2025 10:49:27 +0000 (0:00:06.621) 0:00:24.900 ****
2025-09-18 10:49:37.052157 | orchestrator | ===============================================================================
2025-09-18 10:49:37.052168 | orchestrator | Write ceph keys to the share directory --------------------------------- 12.95s
2025-09-18 10:49:37.052179 | orchestrator | Write ceph keys to the configuration directory -------------------------- 6.62s
2025-09-18 10:49:37.052189 | orchestrator | Fetch all ceph keys ----------------------------------------------------- 4.28s
2025-09-18 10:49:37.052200 | orchestrator | Create share directory -------------------------------------------------- 0.94s
2025-09-18 10:49:37.052211 | orchestrator |
2025-09-18 10:49:37.052222 | orchestrator |
2025-09-18 10:49:37.052233 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:49:37.052244 | orchestrator |
2025-09-18 10:49:37.052266 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:49:37.052278 | orchestrator | Thursday 18 September 2025 10:47:44 +0000 (0:00:00.308) 0:00:00.308 ****
2025-09-18 10:49:37.052288 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:49:37.052299 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:49:37.052310 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:49:37.052321 | orchestrator |
2025-09-18 10:49:37.052332 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:49:37.052343 | orchestrator | Thursday 18 September 2025 10:47:45 +0000 (0:00:00.447) 0:00:00.609 ****
2025-09-18 10:49:37.052354 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True)
2025-09-18 10:49:37.052365 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True)
2025-09-18 10:49:37.052376 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True)
2025-09-18 10:49:37.052387 | orchestrator |
2025-09-18 10:49:37.052398 | orchestrator | PLAY [Apply role horizon] ******************************************************
2025-09-18 10:49:37.052409 | orchestrator |
2025-09-18 10:49:37.052420 | orchestrator | TASK [horizon : include_tasks]
************************************************* 2025-09-18 10:49:37.052430 | orchestrator | Thursday 18 September 2025 10:47:45 +0000 (0:00:00.447) 0:00:01.056 **** 2025-09-18 10:49:37.052441 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:49:37.052452 | orchestrator | 2025-09-18 10:49:37.052463 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-09-18 10:49:37.052474 | orchestrator | Thursday 18 September 2025 10:47:46 +0000 (0:00:00.584) 0:00:01.640 **** 2025-09-18 10:49:37.052540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.052599 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.052642 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.052679 | orchestrator | 2025-09-18 10:49:37.052700 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-09-18 10:49:37.052720 | orchestrator | Thursday 18 September 2025 10:47:47 +0000 (0:00:01.109) 0:00:02.749 **** 2025-09-18 10:49:37.052740 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.052876 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.052891 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.052904 | orchestrator | 2025-09-18 10:49:37.052916 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-09-18 10:49:37.052936 | orchestrator | Thursday 18 September 2025 10:47:47 +0000 (0:00:00.493) 0:00:03.242 **** 2025-09-18 
10:49:37.052954 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-18 10:49:37.052972 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 10:49:37.053011 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 10:49:37.053033 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 10:49:37.053053 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 10:49:37.053071 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 10:49:37.053087 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-09-18 10:49:37.053098 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 10:49:37.053109 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-09-18 10:49:37.053120 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 10:49:37.053130 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 10:49:37.053141 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 10:49:37.053152 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 10:49:37.053163 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 10:49:37.053173 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-09-18 10:49:37.053184 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 10:49:37.053195 | orchestrator | skipping: [testbed-node-2] => (item={'name': 
'cloudkitty', 'enabled': False})  2025-09-18 10:49:37.053205 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'heat', 'enabled': 'no'})  2025-09-18 10:49:37.053227 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-09-18 10:49:37.053238 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-09-18 10:49:37.053249 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-09-18 10:49:37.053260 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-09-18 10:49:37.053270 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-09-18 10:49:37.053281 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-09-18 10:49:37.053293 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-09-18 10:49:37.053305 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-09-18 10:49:37.053316 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-09-18 10:49:37.053327 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-09-18 10:49:37.053345 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-09-18 10:49:37.053356 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, 
testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-09-18 10:49:37.053367 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 2025-09-18 10:49:37.053378 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-09-18 10:49:37.053389 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-09-18 10:49:37.053400 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-09-18 10:49:37.053411 | orchestrator | 2025-09-18 10:49:37.053422 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.053433 | orchestrator | Thursday 18 September 2025 10:47:48 +0000 (0:00:00.704) 0:00:03.946 **** 2025-09-18 10:49:37.053444 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.053455 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.053466 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.053477 | orchestrator | 2025-09-18 10:49:37.053518 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.053532 | orchestrator | Thursday 18 September 2025 10:47:48 +0000 (0:00:00.300) 0:00:04.247 **** 2025-09-18 10:49:37.053543 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053554 | orchestrator | 2025-09-18 10:49:37.053565 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.053582 | orchestrator | Thursday 18 September 2025 10:47:48 +0000 (0:00:00.126) 0:00:04.373 **** 2025-09-18 10:49:37.053594 | 
orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053605 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.053616 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.053627 | orchestrator | 2025-09-18 10:49:37.053638 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.053649 | orchestrator | Thursday 18 September 2025 10:47:49 +0000 (0:00:00.476) 0:00:04.850 **** 2025-09-18 10:49:37.053667 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.053678 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.053689 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.053700 | orchestrator | 2025-09-18 10:49:37.053711 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.053722 | orchestrator | Thursday 18 September 2025 10:47:49 +0000 (0:00:00.305) 0:00:05.156 **** 2025-09-18 10:49:37.053733 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053744 | orchestrator | 2025-09-18 10:49:37.053755 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.053766 | orchestrator | Thursday 18 September 2025 10:47:49 +0000 (0:00:00.121) 0:00:05.277 **** 2025-09-18 10:49:37.053777 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053788 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.053799 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.053810 | orchestrator | 2025-09-18 10:49:37.053821 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.053832 | orchestrator | Thursday 18 September 2025 10:47:50 +0000 (0:00:00.296) 0:00:05.574 **** 2025-09-18 10:49:37.053843 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.053854 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.053865 | orchestrator | ok: 
[testbed-node-2] 2025-09-18 10:49:37.053876 | orchestrator | 2025-09-18 10:49:37.053887 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.053898 | orchestrator | Thursday 18 September 2025 10:47:50 +0000 (0:00:00.306) 0:00:05.880 **** 2025-09-18 10:49:37.053909 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053920 | orchestrator | 2025-09-18 10:49:37.053931 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.053942 | orchestrator | Thursday 18 September 2025 10:47:50 +0000 (0:00:00.155) 0:00:06.036 **** 2025-09-18 10:49:37.053953 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.053964 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.053975 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.053986 | orchestrator | 2025-09-18 10:49:37.053997 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.054008 | orchestrator | Thursday 18 September 2025 10:47:51 +0000 (0:00:00.549) 0:00:06.585 **** 2025-09-18 10:49:37.054077 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.054098 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.054117 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.054137 | orchestrator | 2025-09-18 10:49:37.054158 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.054177 | orchestrator | Thursday 18 September 2025 10:47:51 +0000 (0:00:00.309) 0:00:06.894 **** 2025-09-18 10:49:37.054192 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054203 | orchestrator | 2025-09-18 10:49:37.054214 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.054224 | orchestrator | Thursday 18 September 2025 10:47:51 +0000 (0:00:00.131) 0:00:07.026 **** 
2025-09-18 10:49:37.054235 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054246 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.054257 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.054268 | orchestrator | 2025-09-18 10:49:37.054279 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.054290 | orchestrator | Thursday 18 September 2025 10:47:51 +0000 (0:00:00.304) 0:00:07.330 **** 2025-09-18 10:49:37.054300 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.054311 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.054322 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.054333 | orchestrator | 2025-09-18 10:49:37.054350 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.054361 | orchestrator | Thursday 18 September 2025 10:47:52 +0000 (0:00:00.336) 0:00:07.666 **** 2025-09-18 10:49:37.054372 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054391 | orchestrator | 2025-09-18 10:49:37.054402 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.054413 | orchestrator | Thursday 18 September 2025 10:47:52 +0000 (0:00:00.382) 0:00:08.049 **** 2025-09-18 10:49:37.054424 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054434 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.054446 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.054456 | orchestrator | 2025-09-18 10:49:37.054467 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.054478 | orchestrator | Thursday 18 September 2025 10:47:52 +0000 (0:00:00.310) 0:00:08.359 **** 2025-09-18 10:49:37.054540 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.054553 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.054564 | 
orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.054575 | orchestrator | 2025-09-18 10:49:37.054586 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.054596 | orchestrator | Thursday 18 September 2025 10:47:53 +0000 (0:00:00.322) 0:00:08.682 **** 2025-09-18 10:49:37.054607 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054627 | orchestrator | 2025-09-18 10:49:37.054646 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.054664 | orchestrator | Thursday 18 September 2025 10:47:53 +0000 (0:00:00.141) 0:00:08.824 **** 2025-09-18 10:49:37.054682 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054699 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.054718 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.054738 | orchestrator | 2025-09-18 10:49:37.054757 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.054772 | orchestrator | Thursday 18 September 2025 10:47:53 +0000 (0:00:00.287) 0:00:09.111 **** 2025-09-18 10:49:37.054783 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.054793 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.054804 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.054815 | orchestrator | 2025-09-18 10:49:37.054835 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.054846 | orchestrator | Thursday 18 September 2025 10:47:54 +0000 (0:00:00.544) 0:00:09.656 **** 2025-09-18 10:49:37.054857 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054868 | orchestrator | 2025-09-18 10:49:37.054879 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.054890 | orchestrator | Thursday 18 September 2025 10:47:54 +0000 (0:00:00.153) 
0:00:09.809 **** 2025-09-18 10:49:37.054901 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.054912 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.054923 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.054933 | orchestrator | 2025-09-18 10:49:37.054944 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.054955 | orchestrator | Thursday 18 September 2025 10:47:54 +0000 (0:00:00.282) 0:00:10.092 **** 2025-09-18 10:49:37.054966 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.054977 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.054987 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.054998 | orchestrator | 2025-09-18 10:49:37.055009 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.055020 | orchestrator | Thursday 18 September 2025 10:47:55 +0000 (0:00:00.362) 0:00:10.454 **** 2025-09-18 10:49:37.055031 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055042 | orchestrator | 2025-09-18 10:49:37.055053 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.055062 | orchestrator | Thursday 18 September 2025 10:47:55 +0000 (0:00:00.118) 0:00:10.573 **** 2025-09-18 10:49:37.055072 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055082 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.055091 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.055101 | orchestrator | 2025-09-18 10:49:37.055111 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.055138 | orchestrator | Thursday 18 September 2025 10:47:55 +0000 (0:00:00.315) 0:00:10.889 **** 2025-09-18 10:49:37.055148 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.055157 | orchestrator | ok: [testbed-node-1] 2025-09-18 
10:49:37.055171 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.055188 | orchestrator | 2025-09-18 10:49:37.055204 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.055225 | orchestrator | Thursday 18 September 2025 10:47:56 +0000 (0:00:00.604) 0:00:11.494 **** 2025-09-18 10:49:37.055246 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055262 | orchestrator | 2025-09-18 10:49:37.055288 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.055305 | orchestrator | Thursday 18 September 2025 10:47:56 +0000 (0:00:00.147) 0:00:11.641 **** 2025-09-18 10:49:37.055323 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055339 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.055356 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.055366 | orchestrator | 2025-09-18 10:49:37.055375 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-09-18 10:49:37.055385 | orchestrator | Thursday 18 September 2025 10:47:56 +0000 (0:00:00.285) 0:00:11.927 **** 2025-09-18 10:49:37.055395 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:49:37.055405 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:49:37.055414 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:49:37.055424 | orchestrator | 2025-09-18 10:49:37.055434 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-09-18 10:49:37.055443 | orchestrator | Thursday 18 September 2025 10:47:56 +0000 (0:00:00.316) 0:00:12.244 **** 2025-09-18 10:49:37.055453 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055462 | orchestrator | 2025-09-18 10:49:37.055472 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-09-18 10:49:37.055482 | orchestrator | Thursday 18 September 2025 10:47:56 
+0000 (0:00:00.134) 0:00:12.378 **** 2025-09-18 10:49:37.055515 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055540 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.055551 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.055561 | orchestrator | 2025-09-18 10:49:37.055571 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-09-18 10:49:37.055581 | orchestrator | Thursday 18 September 2025 10:47:57 +0000 (0:00:00.515) 0:00:12.894 **** 2025-09-18 10:49:37.055590 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:49:37.055600 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:49:37.055610 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:49:37.055619 | orchestrator | 2025-09-18 10:49:37.055629 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 2025-09-18 10:49:37.055639 | orchestrator | Thursday 18 September 2025 10:47:59 +0000 (0:00:01.739) 0:00:14.634 **** 2025-09-18 10:49:37.055648 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 10:49:37.055658 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 10:49:37.055667 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-09-18 10:49:37.055677 | orchestrator | 2025-09-18 10:49:37.055687 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-09-18 10:49:37.055696 | orchestrator | Thursday 18 September 2025 10:48:01 +0000 (0:00:01.963) 0:00:16.597 **** 2025-09-18 10:49:37.055706 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-18 10:49:37.055716 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-18 
10:49:37.055726 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-09-18 10:49:37.055744 | orchestrator | 2025-09-18 10:49:37.055754 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-09-18 10:49:37.055764 | orchestrator | Thursday 18 September 2025 10:48:03 +0000 (0:00:02.185) 0:00:18.782 **** 2025-09-18 10:49:37.055781 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 10:49:37.055791 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 10:49:37.055801 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-09-18 10:49:37.055811 | orchestrator | 2025-09-18 10:49:37.055821 | orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-09-18 10:49:37.055830 | orchestrator | Thursday 18 September 2025 10:48:05 +0000 (0:00:02.094) 0:00:20.877 **** 2025-09-18 10:49:37.055840 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055850 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.055859 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.055869 | orchestrator | 2025-09-18 10:49:37.055878 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-09-18 10:49:37.055888 | orchestrator | Thursday 18 September 2025 10:48:05 +0000 (0:00:00.323) 0:00:21.200 **** 2025-09-18 10:49:37.055898 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.055907 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.055917 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.055927 | orchestrator | 2025-09-18 10:49:37.055936 | orchestrator | TASK [horizon : include_tasks] 
************************************************* 2025-09-18 10:49:37.055946 | orchestrator | Thursday 18 September 2025 10:48:06 +0000 (0:00:00.339) 0:00:21.540 **** 2025-09-18 10:49:37.055955 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:49:37.055965 | orchestrator | 2025-09-18 10:49:37.055975 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-09-18 10:49:37.055984 | orchestrator | Thursday 18 September 2025 10:48:06 +0000 (0:00:00.608) 0:00:22.148 **** 2025-09-18 10:49:37.056001 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 
'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.056029 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.056047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 
'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.056064 | orchestrator | 2025-09-18 10:49:37.056074 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-09-18 10:49:37.056084 | orchestrator | Thursday 18 September 2025 10:48:08 +0000 (0:00:01.792) 0:00:23.940 **** 2025-09-18 10:49:37.056102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 
'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:49:37.056113 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.056130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 
'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': 
[]}}}})  2025-09-18 10:49:37.056151 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.056162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:49:37.056172 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.056182 | orchestrator | 2025-09-18 10:49:37.056192 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-09-18 10:49:37.056202 | orchestrator | Thursday 18 September 2025 10:48:09 +0000 (0:00:00.715) 0:00:24.656 **** 2025-09-18 10:49:37.056224 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 
'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:49:37.056240 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:49:37.056251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:49:37.056265 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:49:37.056289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-09-18 10:49:37.056308 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:49:37.056324 | orchestrator | 2025-09-18 10:49:37.056340 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-09-18 10:49:37.056365 | orchestrator | Thursday 18 September 2025 10:48:10 +0000 (0:00:00.852) 0:00:25.509 **** 2025-09-18 10:49:37.056393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 
'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.056441 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-09-18 10:49:37.056475 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/horizon:2024.2', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'no', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': 
True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-09-18 10:49:37.056529 | orchestrator |
2025-09-18 10:49:37.056542 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-18 10:49:37.056552 | orchestrator | Thursday 18 September 2025 10:48:11 +0000 (0:00:01.911) 0:00:27.420 ****
2025-09-18 10:49:37.056562 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:49:37.056572 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:49:37.056581 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:49:37.056591 | orchestrator |
2025-09-18 10:49:37.056601 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-09-18 10:49:37.056611 | orchestrator | Thursday 18 September 2025 10:48:12 +0000 (0:00:00.300) 0:00:27.721 ****
2025-09-18 10:49:37.056621 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:49:37.056631 | orchestrator |
2025-09-18 10:49:37.056640 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-09-18 10:49:37.056650 | orchestrator | Thursday 18 September 2025 10:48:12 +0000 (0:00:00.523) 0:00:28.244 ****
2025-09-18 10:49:37.056660 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:49:37.056670 | orchestrator |
2025-09-18 10:49:37.056687 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-09-18 10:49:37.056697 | orchestrator | Thursday 18 September 2025 10:48:15 +0000 (0:00:02.331) 0:00:30.576 ****
2025-09-18 10:49:37.056707 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:49:37.056717 | orchestrator |
2025-09-18 10:49:37.056727 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-09-18 10:49:37.056737 | orchestrator | Thursday 18 September 2025 10:48:17 +0000 (0:00:02.804) 0:00:33.380 ****
2025-09-18 10:49:37.056746 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:49:37.056757 | orchestrator |
2025-09-18 10:49:37.056766 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-18 10:49:37.056776 | orchestrator | Thursday 18 September 2025 10:48:33 +0000 (0:00:15.737) 0:00:49.118 ****
2025-09-18 10:49:37.056786 | orchestrator |
2025-09-18 10:49:37.056796 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-18 10:49:37.056805 | orchestrator | Thursday 18 September 2025 10:48:33 +0000 (0:00:00.072) 0:00:49.190 ****
2025-09-18 10:49:37.056815 | orchestrator |
2025-09-18 10:49:37.056825 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-09-18 10:49:37.056835 | orchestrator | Thursday 18 September 2025 10:48:33 +0000 (0:00:00.060) 0:00:49.251 ****
2025-09-18 10:49:37.056844 | orchestrator |
2025-09-18 10:49:37.056855 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-09-18 10:49:37.056872 | orchestrator | Thursday 18 September 2025 10:48:33 +0000 (0:00:00.069) 0:00:49.320 ****
2025-09-18 10:49:37.056882 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:49:37.056892 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:49:37.056902 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:49:37.056911 | orchestrator |
2025-09-18 10:49:37.056921 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:49:37.056931 | orchestrator | testbed-node-0 : ok=37  changed=11  unreachable=0 failed=0 skipped=25  rescued=0 ignored=0
2025-09-18 10:49:37.056948 | orchestrator | testbed-node-1 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-18 10:49:37.056958 | orchestrator | testbed-node-2 : ok=34  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=0
2025-09-18 10:49:37.056967 | orchestrator |
2025-09-18 10:49:37.056977 | orchestrator |
2025-09-18 10:49:37.056987 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:49:37.056996 | orchestrator | Thursday 18 September 2025 10:49:34 +0000 (0:01:00.213) 0:01:49.534 ****
2025-09-18 10:49:37.057006 | orchestrator | ===============================================================================
2025-09-18 10:49:37.057016 | orchestrator | horizon : Restart horizon container ------------------------------------ 60.21s
2025-09-18 10:49:37.057025 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 15.74s
2025-09-18 10:49:37.057035 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.80s
2025-09-18 10:49:37.057045 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.33s
2025-09-18 10:49:37.057054 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.19s
2025-09-18 10:49:37.057064 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.09s
2025-09-18 10:49:37.057073 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 1.96s
2025-09-18 10:49:37.057083 | orchestrator | horizon : Deploy horizon container -------------------------------------- 1.91s
2025-09-18 10:49:37.057098 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.79s
2025-09-18 10:49:37.057108 | orchestrator | horizon : Copying over config.json files for services ------------------- 1.74s
2025-09-18 10:49:37.057117 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.11s
2025-09-18 10:49:37.057127 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 0.85s
2025-09-18 10:49:37.057136 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 0.72s
2025-09-18 10:49:37.057146 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.70s
2025-09-18 10:49:37.057156 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.61s
2025-09-18 10:49:37.057165 | orchestrator | horizon : Update policy file name --------------------------------------- 0.60s
2025-09-18 10:49:37.057175 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.58s
2025-09-18 10:49:37.057185 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2025-09-18 10:49:37.057194 | orchestrator | horizon : Update policy file name --------------------------------------- 0.54s
2025-09-18 10:49:37.057204 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.52s
2025-09-18 10:49:37.057213 | orchestrator | 2025-09-18 10:49:37 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:37.057223 | orchestrator | 2025-09-18 10:49:37 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:37.057233 | orchestrator | 2025-09-18 10:49:37 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:40.093239 | orchestrator | 2025-09-18 10:49:40 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:40.094152 | orchestrator | 2025-09-18 10:49:40 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:40.094254 | orchestrator | 2025-09-18 10:49:40 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:43.140727 | orchestrator | 2025-09-18 10:49:43 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:43.142111 | orchestrator | 2025-09-18 10:49:43 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:43.142231 | orchestrator | 2025-09-18 10:49:43 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:46.190649 | orchestrator | 2025-09-18 10:49:46 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:46.192593 | orchestrator | 2025-09-18 10:49:46 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:46.192827 | orchestrator | 2025-09-18 10:49:46 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:49.239159 | orchestrator | 2025-09-18 10:49:49 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:49.239596 | orchestrator | 2025-09-18 10:49:49 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:49.239624 | orchestrator | 2025-09-18 10:49:49 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:52.277526 | orchestrator | 2025-09-18 10:49:52 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:52.278182 | orchestrator | 2025-09-18 10:49:52 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:52.278210 | orchestrator | 2025-09-18 10:49:52 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:55.325238 | orchestrator | 2025-09-18 10:49:55 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:55.327229 | orchestrator | 2025-09-18 10:49:55 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:55.327256 | orchestrator | 2025-09-18 10:49:55 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:49:58.375522 | orchestrator | 2025-09-18 10:49:58 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:49:58.377689 | orchestrator | 2025-09-18 10:49:58 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:49:58.377806 | orchestrator | 2025-09-18 10:49:58 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:01.420001 | orchestrator | 2025-09-18 10:50:01 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:01.420834 | orchestrator | 2025-09-18 10:50:01 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:01.420865 | orchestrator | 2025-09-18 10:50:01 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:04.466239 | orchestrator | 2025-09-18 10:50:04 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:04.468020 | orchestrator | 2025-09-18 10:50:04 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:04.468049 | orchestrator | 2025-09-18 10:50:04 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:07.516594 | orchestrator | 2025-09-18 10:50:07 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:07.518150 | orchestrator | 2025-09-18 10:50:07 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:07.518345 | orchestrator | 2025-09-18 10:50:07 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:10.570301 | orchestrator | 2025-09-18 10:50:10 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:10.572222 | orchestrator | 2025-09-18 10:50:10 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:10.572329 | orchestrator | 2025-09-18 10:50:10 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:13.625232 | orchestrator | 2025-09-18 10:50:13 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:13.628058 | orchestrator | 2025-09-18 10:50:13 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:13.628109 | orchestrator | 2025-09-18 10:50:13 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:16.684525 | orchestrator | 2025-09-18 10:50:16 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:16.686587 | orchestrator | 2025-09-18 10:50:16 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:16.686624 | orchestrator | 2025-09-18 10:50:16 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:19.738669 | orchestrator | 2025-09-18 10:50:19 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:19.739923 | orchestrator | 2025-09-18 10:50:19 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:19.740042 | orchestrator | 2025-09-18 10:50:19 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:22.786264 | orchestrator | 2025-09-18 10:50:22 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state STARTED
2025-09-18 10:50:22.788351 | orchestrator | 2025-09-18 10:50:22 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:22.788380 | orchestrator | 2025-09-18 10:50:22 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:25.846299 | orchestrator | 2025-09-18 10:50:25 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED
2025-09-18 10:50:25.847034 | orchestrator | 2025-09-18 10:50:25 | INFO  | Task 91f7b7dd-f44f-4430-9495-9f89451ccd89 is in state STARTED
2025-09-18 10:50:25.848887 | orchestrator | 2025-09-18 10:50:25 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED
2025-09-18 10:50:25.851631 | orchestrator | 2025-09-18 10:50:25 | INFO  | Task 2ff035ee-3ef3-4c64-b4ad-6b66aa7e36cf is in state SUCCESS
2025-09-18 10:50:25.853217 | orchestrator | 2025-09-18 10:50:25 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state STARTED
2025-09-18 10:50:25.853249 | orchestrator | 2025-09-18 10:50:25 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:50:28.894270 | orchestrator | 2025-09-18 10:50:28 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED
2025-09-18 10:50:28.894535 | orchestrator | 2025-09-18 10:50:28 | INFO  | Task 91f7b7dd-f44f-4430-9495-9f89451ccd89 is in state STARTED
2025-09-18 10:50:28.895042 | orchestrator | 2025-09-18 10:50:28 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED
2025-09-18 10:50:28.896787 | orchestrator | 2025-09-18 10:50:28 | INFO  | Task 2259dea3-f9f0-433c-8191-0bc304b7331c is in state SUCCESS
2025-09-18 10:50:28.900256 | orchestrator |
2025-09-18 10:50:28.900298 | orchestrator |
2025-09-18 10:50:28.900311 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-09-18 10:50:28.900323 | orchestrator |
2025-09-18 10:50:28.900335 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-09-18 10:50:28.900346 | orchestrator | Thursday 18 September 2025 10:49:31 +0000 (0:00:00.217) 0:00:00.217 ****
2025-09-18 10:50:28.900357 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-09-18 10:50:28.900370 | orchestrator |
2025-09-18 10:50:28.900382 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-09-18 10:50:28.900393 | orchestrator | Thursday 18 September 2025 10:49:31 +0000 (0:00:00.218) 0:00:00.435 ****
2025-09-18 10:50:28.900405 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-09-18 10:50:28.900417 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-09-18 10:50:28.900504 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-09-18 10:50:28.900518 | orchestrator |
2025-09-18 10:50:28.900529 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-09-18 10:50:28.900540 | orchestrator | Thursday 18 September 2025 10:49:32 +0000 (0:00:01.095) 0:00:01.531 ****
2025-09-18 10:50:28.900551 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-09-18 10:50:28.900562 | orchestrator |
2025-09-18 10:50:28.900988 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-09-18 10:50:28.901001 | orchestrator | Thursday 18 September 2025 10:49:33 +0000 (0:00:01.021) 0:00:02.552 ****
2025-09-18 10:50:28.901013 | orchestrator | changed: [testbed-manager]
2025-09-18 10:50:28.901024 | orchestrator |
2025-09-18 10:50:28.901035 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-09-18 10:50:28.901046 | orchestrator | Thursday 18 September 2025 10:49:34 +0000 (0:00:00.923) 0:00:03.475 ****
2025-09-18 10:50:28.901057 | orchestrator | changed: [testbed-manager]
2025-09-18 10:50:28.901068 | orchestrator |
2025-09-18 10:50:28.901079 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-09-18 10:50:28.901090 | orchestrator | Thursday 18 September 2025 10:49:35 +0000 (0:00:00.809) 0:00:04.285 ****
2025-09-18 10:50:28.901101 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-09-18 10:50:28.901112 | orchestrator | ok: [testbed-manager]
2025-09-18 10:50:28.901123 | orchestrator |
2025-09-18 10:50:28.901134 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-09-18 10:50:28.901145 | orchestrator | Thursday 18 September 2025 10:50:12 +0000 (0:00:36.978) 0:00:41.263 ****
2025-09-18 10:50:28.901156 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-09-18 10:50:28.901168 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-09-18 10:50:28.901179 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-09-18 10:50:28.901190 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-09-18 10:50:28.901201 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-09-18 10:50:28.901212 | orchestrator |
2025-09-18 10:50:28.901223 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-09-18 10:50:28.901234 | orchestrator | Thursday 18 September 2025 10:50:16 +0000 (0:00:00.470) 0:00:45.453 ****
2025-09-18 10:50:28.901245 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-09-18 10:50:28.901256 | orchestrator |
2025-09-18 10:50:28.901267 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-09-18 10:50:28.901278 | orchestrator | Thursday 18 September 2025 10:50:17 +0000 (0:00:00.138) 0:00:45.923 ****
2025-09-18 10:50:28.901289 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:50:28.901299 | orchestrator |
2025-09-18 10:50:28.901310 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-09-18 10:50:28.901321 | orchestrator | Thursday 18 September 2025 10:50:17 +0000 (0:00:00.309) 0:00:46.062 ****
2025-09-18 10:50:28.901332 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:50:28.901343 | orchestrator |
2025-09-18 10:50:28.901354 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-09-18 10:50:28.901365 | orchestrator | Thursday 18 September 2025 10:50:17 +0000 (0:00:00.309) 0:00:46.372 ****
2025-09-18 10:50:28.901376 | orchestrator | changed: [testbed-manager]
2025-09-18 10:50:28.901386 | orchestrator |
2025-09-18 10:50:28.901397 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-09-18 10:50:28.901408 | orchestrator | Thursday 18 September 2025 10:50:19 +0000 (0:00:02.103) 0:00:48.475 ****
2025-09-18 10:50:28.901419 | orchestrator | changed: [testbed-manager]
2025-09-18 10:50:28.901431 | orchestrator |
2025-09-18 10:50:28.901466 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-09-18 10:50:28.901478 | orchestrator | Thursday 18 September 2025 10:50:20 +0000 (0:00:00.747) 0:00:49.223 ****
2025-09-18 10:50:28.901501 | orchestrator | changed: [testbed-manager]
2025-09-18 10:50:28.901512 | orchestrator |
2025-09-18 10:50:28.901523 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-09-18 10:50:28.901534 | orchestrator | Thursday 18 September 2025 10:50:21 +0000 (0:00:00.641) 0:00:49.865 ****
2025-09-18 10:50:28.901545 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-09-18 10:50:28.901556 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-09-18 10:50:28.901567 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-09-18 10:50:28.901578 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-09-18 10:50:28.901590 | orchestrator |
2025-09-18 10:50:28.901603 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:50:28.901615 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-09-18 10:50:28.901629 | orchestrator |
2025-09-18 10:50:28.901641 | orchestrator |
2025-09-18
10:50:28.901697 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:50:28.901712 | orchestrator | Thursday 18 September 2025 10:50:22 +0000 (0:00:01.495) 0:00:51.360 **** 2025-09-18 10:50:28.901725 | orchestrator | =============================================================================== 2025-09-18 10:50:28.901737 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 36.98s 2025-09-18 10:50:28.901749 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 4.19s 2025-09-18 10:50:28.901761 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 2.10s 2025-09-18 10:50:28.901773 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.50s 2025-09-18 10:50:28.901785 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.10s 2025-09-18 10:50:28.901797 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.02s 2025-09-18 10:50:28.901809 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.92s 2025-09-18 10:50:28.901830 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 0.81s 2025-09-18 10:50:28.901843 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.75s 2025-09-18 10:50:28.901856 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.64s 2025-09-18 10:50:28.901868 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.47s 2025-09-18 10:50:28.901880 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.31s 2025-09-18 10:50:28.901892 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s 2025-09-18 10:50:28.901904 | 
orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.14s 2025-09-18 10:50:28.901916 | orchestrator | 2025-09-18 10:50:28.901929 | orchestrator | 2025-09-18 10:50:28.901941 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:50:28.901952 | orchestrator | 2025-09-18 10:50:28.901963 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:50:28.901974 | orchestrator | Thursday 18 September 2025 10:47:44 +0000 (0:00:00.257) 0:00:00.257 **** 2025-09-18 10:50:28.901985 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:50:28.901996 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:50:28.902007 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:50:28.902066 | orchestrator | 2025-09-18 10:50:28.902078 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:50:28.902089 | orchestrator | Thursday 18 September 2025 10:47:45 +0000 (0:00:00.307) 0:00:00.565 **** 2025-09-18 10:50:28.902100 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-09-18 10:50:28.902111 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-09-18 10:50:28.902122 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-09-18 10:50:28.902141 | orchestrator | 2025-09-18 10:50:28.902152 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-09-18 10:50:28.902163 | orchestrator | 2025-09-18 10:50:28.902174 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 10:50:28.902186 | orchestrator | Thursday 18 September 2025 10:47:45 +0000 (0:00:00.443) 0:00:01.008 **** 2025-09-18 10:50:28.902196 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:50:28.902208 | 
orchestrator | 2025-09-18 10:50:28.902219 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-09-18 10:50:28.902230 | orchestrator | Thursday 18 September 2025 10:47:46 +0000 (0:00:00.560) 0:00:01.569 **** 2025-09-18 10:50:28.902248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902327 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902341 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902361 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902373 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902385 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902406 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.902423 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.902435 | orchestrator |
2025-09-18 10:50:28.902469 | orchestrator | TASK [keystone : Check if policies shall be overwritten] ***********************
2025-09-18 10:50:28.902481 | orchestrator | Thursday 18 September 2025 10:47:47 +0000 (0:00:01.686) 0:00:03.256 ****
2025-09-18 10:50:28.902492 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml)
2025-09-18 10:50:28.902503 | orchestrator |
2025-09-18 10:50:28.902515 | orchestrator | TASK [keystone : Set keystone policy file] *************************************
2025-09-18 10:50:28.902533 | orchestrator | Thursday 18 September 2025 10:47:48 +0000 (0:00:00.907) 0:00:04.163 ****
2025-09-18 10:50:28.902544 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:50:28.902555 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:50:28.902566 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:50:28.902577 | orchestrator |
2025-09-18 10:50:28.902588 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] *********
2025-09-18 10:50:28.902599 | orchestrator | Thursday 18 September 2025 10:47:49 +0000 (0:00:00.460) 0:00:04.624 ****
2025-09-18 10:50:28.902610 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:50:28.902621 | orchestrator |
2025-09-18 10:50:28.902632 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-18 10:50:28.902642 | orchestrator | Thursday 18 September 2025 10:47:49 +0000 (0:00:00.678) 0:00:05.303 ****
2025-09-18 10:50:28.902653 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:50:28.902665 | orchestrator |
2025-09-18 10:50:28.902676 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] *******
2025-09-18 10:50:28.902686 | orchestrator | Thursday 18 September 2025 10:47:50 +0000 (0:00:00.565) 0:00:05.868 ****
2025-09-18 10:50:28.902699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external':
False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902721 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902739 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.902758 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902771 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902782 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.902840 | orchestrator | 2025-09-18 10:50:28.902852 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-09-18 10:50:28.902863 | orchestrator | Thursday 18 September 2025 10:47:53 +0000 (0:00:03.283) 0:00:09.151 **** 2025-09-18 10:50:28.902875 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 10:50:28.902887 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:50:28.902899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:50:28.902911 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.903045 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 
'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})  2025-09-18 10:50:28.903072 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-09-18 10:50:28.903096 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-09-18 10:50:28.903108 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:50:28.903120 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903155 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.903167 | orchestrator |
2025-09-18 10:50:28.903178 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] ****
2025-09-18 10:50:28.903189 | orchestrator | Thursday 18 September 2025 10:47:54 +0000 (0:00:00.750) 0:00:09.902 ****
2025-09-18 10:50:28.903211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903251 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903263 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903275 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.903286 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903333 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.903350 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903375 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903386 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.903397 | orchestrator |
2025-09-18 10:50:28.903408 | orchestrator | TASK [keystone : Copying over config.json files for services] ******************
2025-09-18 10:50:28.903419 | orchestrator | Thursday 18 September 2025 10:47:55 +0000 (0:00:00.797) 0:00:10.699 ****
2025-09-18 10:50:28.903431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903560 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903600 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903612 | orchestrator |
2025-09-18 10:50:28.903623 | orchestrator | TASK [keystone : Copying over keystone.conf] ***********************************
2025-09-18 10:50:28.903635 | orchestrator | Thursday 18 September 2025 10:47:58 +0000 (0:00:03.508) 0:00:14.208 ****
2025-09-18 10:50:28.903647 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903659 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903671 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903716 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.903729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.903741 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903752 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903772 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.903783 | orchestrator |
2025-09-18 10:50:28.903794 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] *****************
2025-09-18 10:50:28.903806 | orchestrator | Thursday 18 September 2025 10:48:04 +0000 (0:00:05.790) 0:00:19.998 ****
2025-09-18 10:50:28.903817 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:50:28.903834 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:50:28.903845 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:50:28.903856 | orchestrator |
2025-09-18 10:50:28.903867 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] *************
2025-09-18 10:50:28.903879 | orchestrator | Thursday 18 September 2025 10:48:05 +0000 (0:00:01.445) 0:00:21.444 ****
2025-09-18 10:50:28.903890 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.903901 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.903912 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.903923 | orchestrator |
2025-09-18 10:50:28.903935 | orchestrator | TASK [keystone : Get file list in custom domains folder] ***********************
2025-09-18 10:50:28.903946 | orchestrator | Thursday 18 September 2025 10:48:06 +0000 (0:00:00.315) 0:00:22.022 ****
2025-09-18 10:50:28.903957 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.903968 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.903979 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.903990 | orchestrator |
2025-09-18 10:50:28.904001 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ********************
2025-09-18 10:50:28.904021 | orchestrator | Thursday 18 September 2025 10:48:06 +0000 (0:00:00.560) 0:00:22.338 ****
2025-09-18 10:50:28.904032 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.904043 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.904054 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.904065 | orchestrator |
2025-09-18 10:50:28.904076 | orchestrator | TASK [keystone : Copying over existing policy file] ****************************
2025-09-18 10:50:28.904087 | orchestrator | Thursday 18 September 2025 10:48:07 +0000 (0:00:00.560) 0:00:22.898 ****
2025-09-18 10:50:28.904099 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.904111 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.904132 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.904151 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.904169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.904181 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})
2025-09-18 10:50:28.904192 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.904211 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.904222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})
2025-09-18 10:50:28.904234 | orchestrator |
2025-09-18 10:50:28.904245 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-09-18 10:50:28.904256 | orchestrator | Thursday 18 September 2025 10:48:09 +0000 (0:00:02.551) 0:00:25.450 ****
2025-09-18 10:50:28.904267 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.904278 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.904290 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.904301 | orchestrator |
2025-09-18 10:50:28.904312 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ******************************
2025-09-18 10:50:28.904323 | orchestrator | Thursday 18 September 2025 10:48:10 +0000 (0:00:00.315) 0:00:25.766 ****
2025-09-18 10:50:28.904339 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-18 10:50:28.904351 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-18 10:50:28.904363 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2)
2025-09-18 10:50:28.904374 | orchestrator |
2025-09-18 10:50:28.904385 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] **************
2025-09-18 10:50:28.904396 | orchestrator | Thursday 18 September 2025 10:48:12 +0000 (0:00:01.821) 0:00:27.587 ****
2025-09-18 10:50:28.904407 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:50:28.904418 | orchestrator |
2025-09-18 10:50:28.904429 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ******************************
2025-09-18 10:50:28.904459 | orchestrator | Thursday 18 September 2025 10:48:13 +0000 (0:00:00.936) 0:00:28.524 ****
2025-09-18 10:50:28.904472 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:50:28.904487 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:50:28.904499 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:50:28.904510 | orchestrator |
2025-09-18 10:50:28.904521 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] *****************
2025-09-18 10:50:28.904532 | orchestrator | Thursday 18 September 2025 10:48:13 +0000 (0:00:00.820) 0:00:29.345 ****
2025-09-18 10:50:28.904543 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:50:28.904554 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-09-18 10:50:28.904565 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-09-18 10:50:28.904576 | orchestrator |
2025-09-18 10:50:28.904587 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] ***
2025-09-18 10:50:28.904598 | orchestrator | Thursday 18 September 2025 10:48:14 +0000 (0:00:01.040) 0:00:30.386 ****
2025-09-18 10:50:28.904617 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:50:28.904628 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:50:28.904639 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:50:28.904650 | orchestrator |
2025-09-18 10:50:28.904661 | orchestrator | TASK [keystone : Copying files for keystone-fernet] ****************************
2025-09-18 10:50:28.904672 | orchestrator | Thursday 18 September 2025 10:48:15 +0000 (0:00:00.342) 0:00:30.728 ****
2025-09-18 10:50:28.904683 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-18 10:50:28.904694 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-18 10:50:28.904705 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'})
2025-09-18 10:50:28.904716 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-18 10:50:28.904727 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-18 10:50:28.904738 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'})
2025-09-18 10:50:28.904749 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-18 10:50:28.904760 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-18 10:50:28.904771 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'})
2025-09-18 10:50:28.904782 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-18 10:50:28.904793 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-18 10:50:28.904804 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'})
2025-09-18 10:50:28.904815 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-18 10:50:28.904826 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-18 10:50:28.904837 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'})
2025-09-18 10:50:28.904848 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-18 10:50:28.904860 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-18 10:50:28.904871 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-09-18 10:50:28.904882 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-18 10:50:28.904893 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-18 10:50:28.904904 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-09-18 10:50:28.904915 | orchestrator |
2025-09-18 10:50:28.904926 | orchestrator | TASK [keystone : Copying files for keystone-ssh] *******************************
2025-09-18 10:50:28.904937 | orchestrator | Thursday 18 September 2025 10:48:24 +0000 (0:00:09.578) 0:00:40.307 ****
2025-09-18 10:50:28.904947 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-18 10:50:28.904958 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-18 10:50:28.904970 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-09-18 10:50:28.904986 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-18 10:50:28.904998 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-18 10:50:28.905009 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-09-18 10:50:28.905025 | orchestrator |
2025-09-18 10:50:28.905036 | orchestrator | TASK [keystone : Check keystone containers] ************************************
2025-09-18 10:50:28.905047 | orchestrator | Thursday 18 September 2025 10:48:28 +0000 (0:00:03.211) 0:00:43.519 ****
2025-09-18 10:50:28.905063 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}})
2025-09-18 10:50:28.905076 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.905089 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone:2024.2', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', '', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance roundrobin']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance roundrobin']}}}}) 2025-09-18 10:50:28.905101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-ssh:2024.2', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905155 | orchestrator | 
changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905167 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905178 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/keystone-fernet:2024.2', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-09-18 10:50:28.905189 | orchestrator | 2025-09-18 10:50:28.905201 | orchestrator | TASK [keystone : 
include_tasks] ************************************************ 2025-09-18 10:50:28.905212 | orchestrator | Thursday 18 September 2025 10:48:30 +0000 (0:00:02.601) 0:00:46.121 **** 2025-09-18 10:50:28.905223 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.905234 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:50:28.905245 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:50:28.905256 | orchestrator | 2025-09-18 10:50:28.905267 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-09-18 10:50:28.905278 | orchestrator | Thursday 18 September 2025 10:48:30 +0000 (0:00:00.338) 0:00:46.459 **** 2025-09-18 10:50:28.905289 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905300 | orchestrator | 2025-09-18 10:50:28.905311 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-09-18 10:50:28.905331 | orchestrator | Thursday 18 September 2025 10:48:33 +0000 (0:00:02.255) 0:00:48.715 **** 2025-09-18 10:50:28.905342 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905353 | orchestrator | 2025-09-18 10:50:28.905363 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-09-18 10:50:28.905375 | orchestrator | Thursday 18 September 2025 10:48:35 +0000 (0:00:02.190) 0:00:50.906 **** 2025-09-18 10:50:28.905385 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:50:28.905396 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:50:28.905407 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:50:28.905418 | orchestrator | 2025-09-18 10:50:28.905429 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-09-18 10:50:28.905461 | orchestrator | Thursday 18 September 2025 10:48:36 +0000 (0:00:00.966) 0:00:51.873 **** 2025-09-18 10:50:28.905472 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:50:28.905484 | orchestrator | ok: 
[testbed-node-1] 2025-09-18 10:50:28.905494 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:50:28.905505 | orchestrator | 2025-09-18 10:50:28.905516 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-09-18 10:50:28.905527 | orchestrator | Thursday 18 September 2025 10:48:37 +0000 (0:00:00.621) 0:00:52.494 **** 2025-09-18 10:50:28.905538 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.905549 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:50:28.905560 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:50:28.905571 | orchestrator | 2025-09-18 10:50:28.905582 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-09-18 10:50:28.905593 | orchestrator | Thursday 18 September 2025 10:48:37 +0000 (0:00:00.585) 0:00:53.079 **** 2025-09-18 10:50:28.905604 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905615 | orchestrator | 2025-09-18 10:50:28.905625 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-09-18 10:50:28.905642 | orchestrator | Thursday 18 September 2025 10:48:51 +0000 (0:00:14.199) 0:01:07.278 **** 2025-09-18 10:50:28.905653 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905663 | orchestrator | 2025-09-18 10:50:28.905674 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-18 10:50:28.905685 | orchestrator | Thursday 18 September 2025 10:49:01 +0000 (0:00:09.972) 0:01:17.251 **** 2025-09-18 10:50:28.905696 | orchestrator | 2025-09-18 10:50:28.905707 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-09-18 10:50:28.905718 | orchestrator | Thursday 18 September 2025 10:49:01 +0000 (0:00:00.063) 0:01:17.314 **** 2025-09-18 10:50:28.905729 | orchestrator | 2025-09-18 10:50:28.905739 | orchestrator | TASK [keystone : Flush handlers] 
*********************************************** 2025-09-18 10:50:28.905750 | orchestrator | Thursday 18 September 2025 10:49:01 +0000 (0:00:00.062) 0:01:17.377 **** 2025-09-18 10:50:28.905761 | orchestrator | 2025-09-18 10:50:28.905772 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-09-18 10:50:28.905782 | orchestrator | Thursday 18 September 2025 10:49:01 +0000 (0:00:00.064) 0:01:17.442 **** 2025-09-18 10:50:28.905793 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905804 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:50:28.905815 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:50:28.905826 | orchestrator | 2025-09-18 10:50:28.905837 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-09-18 10:50:28.905848 | orchestrator | Thursday 18 September 2025 10:49:20 +0000 (0:00:18.791) 0:01:36.233 **** 2025-09-18 10:50:28.905858 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905869 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:50:28.905881 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:50:28.905891 | orchestrator | 2025-09-18 10:50:28.905902 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-09-18 10:50:28.905913 | orchestrator | Thursday 18 September 2025 10:49:25 +0000 (0:00:05.090) 0:01:41.324 **** 2025-09-18 10:50:28.905924 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.905942 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:50:28.905953 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:50:28.905964 | orchestrator | 2025-09-18 10:50:28.905974 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 10:50:28.905985 | orchestrator | Thursday 18 September 2025 10:49:38 +0000 (0:00:12.217) 0:01:53.542 **** 2025-09-18 10:50:28.905996 | orchestrator | included: 
/ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:50:28.906007 | orchestrator | 2025-09-18 10:50:28.906066 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] *********************** 2025-09-18 10:50:28.906078 | orchestrator | Thursday 18 September 2025 10:49:38 +0000 (0:00:00.645) 0:01:54.187 **** 2025-09-18 10:50:28.906089 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:50:28.906100 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:50:28.906111 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:50:28.906122 | orchestrator | 2025-09-18 10:50:28.906133 | orchestrator | TASK [keystone : Run key distribution] ***************************************** 2025-09-18 10:50:28.906144 | orchestrator | Thursday 18 September 2025 10:49:39 +0000 (0:00:00.808) 0:01:54.995 **** 2025-09-18 10:50:28.906156 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:50:28.906167 | orchestrator | 2025-09-18 10:50:28.906178 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] **** 2025-09-18 10:50:28.906189 | orchestrator | Thursday 18 September 2025 10:49:41 +0000 (0:00:01.725) 0:01:56.721 **** 2025-09-18 10:50:28.906200 | orchestrator | changed: [testbed-node-0] => (item=RegionOne) 2025-09-18 10:50:28.906211 | orchestrator | 2025-09-18 10:50:28.906221 | orchestrator | TASK [service-ks-register : keystone | Creating services] ********************** 2025-09-18 10:50:28.906232 | orchestrator | Thursday 18 September 2025 10:49:52 +0000 (0:00:11.053) 0:02:07.775 **** 2025-09-18 10:50:28.906243 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity)) 2025-09-18 10:50:28.906254 | orchestrator | 2025-09-18 10:50:28.906265 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] ********************* 2025-09-18 10:50:28.906276 | orchestrator | Thursday 18 September 2025 10:50:16 +0000 (0:00:23.806) 0:02:31.582 **** 2025-09-18 
10:50:28.906287 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal) 2025-09-18 10:50:28.906298 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public) 2025-09-18 10:50:28.906309 | orchestrator | 2025-09-18 10:50:28.906319 | orchestrator | TASK [service-ks-register : keystone | Creating projects] ********************** 2025-09-18 10:50:28.906330 | orchestrator | Thursday 18 September 2025 10:50:23 +0000 (0:00:07.092) 0:02:38.675 **** 2025-09-18 10:50:28.906341 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.906352 | orchestrator | 2025-09-18 10:50:28.906363 | orchestrator | TASK [service-ks-register : keystone | Creating users] ************************* 2025-09-18 10:50:28.906374 | orchestrator | Thursday 18 September 2025 10:50:23 +0000 (0:00:00.123) 0:02:38.798 **** 2025-09-18 10:50:28.906385 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.906396 | orchestrator | 2025-09-18 10:50:28.906414 | orchestrator | TASK [service-ks-register : keystone | Creating roles] ************************* 2025-09-18 10:50:28.906425 | orchestrator | Thursday 18 September 2025 10:50:23 +0000 (0:00:00.117) 0:02:38.916 **** 2025-09-18 10:50:28.906436 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.906464 | orchestrator | 2025-09-18 10:50:28.906475 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ******************** 2025-09-18 10:50:28.906486 | orchestrator | Thursday 18 September 2025 10:50:23 +0000 (0:00:00.122) 0:02:39.038 **** 2025-09-18 10:50:28.906497 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.906508 | orchestrator | 2025-09-18 10:50:28.906519 | orchestrator | TASK [keystone : Creating default user role] *********************************** 2025-09-18 10:50:28.906530 | orchestrator | Thursday 18 September 2025 10:50:24 +0000 (0:00:00.550) 0:02:39.589 **** 2025-09-18 10:50:28.906541 
| orchestrator | ok: [testbed-node-0] 2025-09-18 10:50:28.906560 | orchestrator | 2025-09-18 10:50:28.906571 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-09-18 10:50:28.906582 | orchestrator | Thursday 18 September 2025 10:50:27 +0000 (0:00:03.402) 0:02:42.991 **** 2025-09-18 10:50:28.906598 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:50:28.906609 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:50:28.906620 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:50:28.906631 | orchestrator | 2025-09-18 10:50:28.906642 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:50:28.906653 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0 2025-09-18 10:50:28.906665 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-18 10:50:28.906676 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-09-18 10:50:28.906687 | orchestrator | 2025-09-18 10:50:28.906698 | orchestrator | 2025-09-18 10:50:28.906709 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:50:28.906720 | orchestrator | Thursday 18 September 2025 10:50:28 +0000 (0:00:00.554) 0:02:43.546 **** 2025-09-18 10:50:28.906731 | orchestrator | =============================================================================== 2025-09-18 10:50:28.906742 | orchestrator | service-ks-register : keystone | Creating services --------------------- 23.81s 2025-09-18 10:50:28.906752 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 18.79s 2025-09-18 10:50:28.906763 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 14.20s 2025-09-18 10:50:28.906774 | orchestrator | keystone : Restart 
keystone container ---------------------------------- 12.22s 2025-09-18 10:50:28.906785 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint --- 11.05s 2025-09-18 10:50:28.906796 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.97s 2025-09-18 10:50:28.906808 | orchestrator | keystone : Copying files for keystone-fernet ---------------------------- 9.58s 2025-09-18 10:50:28.906819 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 7.09s 2025-09-18 10:50:28.906830 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 5.79s 2025-09-18 10:50:28.906841 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 5.09s 2025-09-18 10:50:28.906851 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.51s 2025-09-18 10:50:28.906862 | orchestrator | keystone : Creating default user role ----------------------------------- 3.40s 2025-09-18 10:50:28.906873 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.28s 2025-09-18 10:50:28.906884 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.21s 2025-09-18 10:50:28.906894 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.60s 2025-09-18 10:50:28.906905 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.55s 2025-09-18 10:50:28.906916 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.26s 2025-09-18 10:50:28.906927 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.19s 2025-09-18 10:50:28.906938 | orchestrator | keystone : Copying over wsgi-keystone.conf ------------------------------ 1.82s 2025-09-18 10:50:28.906949 | orchestrator | keystone : Run key distribution 
----------------------------------------- 1.73s 2025-09-18 10:50:28.906959 | orchestrator | 2025-09-18 10:50:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:31.963370 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:31.963568 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:31.963626 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task 91f7b7dd-f44f-4430-9495-9f89451ccd89 is in state SUCCESS 2025-09-18 10:50:31.963646 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:31.963665 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:31.963685 | orchestrator | 2025-09-18 10:50:31 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:31.963704 | orchestrator | 2025-09-18 10:50:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:34.966417 | orchestrator | 2025-09-18 10:50:34 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:34.966621 | orchestrator | 2025-09-18 10:50:34 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:34.967327 | orchestrator | 2025-09-18 10:50:34 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:34.967948 | orchestrator | 2025-09-18 10:50:34 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:34.968594 | orchestrator | 2025-09-18 10:50:34 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:34.968632 | orchestrator | 2025-09-18 10:50:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:38.004915 | orchestrator | 2025-09-18 10:50:38 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is 
in state STARTED 2025-09-18 10:50:38.005003 | orchestrator | 2025-09-18 10:50:38 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:38.006542 | orchestrator | 2025-09-18 10:50:38 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:38.006570 | orchestrator | 2025-09-18 10:50:38 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:38.006582 | orchestrator | 2025-09-18 10:50:38 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:38.006594 | orchestrator | 2025-09-18 10:50:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:41.035598 | orchestrator | 2025-09-18 10:50:41 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:41.036093 | orchestrator | 2025-09-18 10:50:41 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:41.036612 | orchestrator | 2025-09-18 10:50:41 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:41.037352 | orchestrator | 2025-09-18 10:50:41 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:41.037989 | orchestrator | 2025-09-18 10:50:41 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:41.038012 | orchestrator | 2025-09-18 10:50:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:44.083880 | orchestrator | 2025-09-18 10:50:44 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:44.085524 | orchestrator | 2025-09-18 10:50:44 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:44.086988 | orchestrator | 2025-09-18 10:50:44 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:44.088704 | orchestrator | 2025-09-18 10:50:44 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in 
state STARTED 2025-09-18 10:50:44.089563 | orchestrator | 2025-09-18 10:50:44 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:44.089611 | orchestrator | 2025-09-18 10:50:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:47.132855 | orchestrator | 2025-09-18 10:50:47 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:47.133642 | orchestrator | 2025-09-18 10:50:47 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:47.135751 | orchestrator | 2025-09-18 10:50:47 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:47.137091 | orchestrator | 2025-09-18 10:50:47 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:47.138985 | orchestrator | 2025-09-18 10:50:47 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:47.139068 | orchestrator | 2025-09-18 10:50:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:50.180278 | orchestrator | 2025-09-18 10:50:50 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:50.181932 | orchestrator | 2025-09-18 10:50:50 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:50.183746 | orchestrator | 2025-09-18 10:50:50 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:50.185282 | orchestrator | 2025-09-18 10:50:50 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:50.189145 | orchestrator | 2025-09-18 10:50:50 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:50.189888 | orchestrator | 2025-09-18 10:50:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:53.240398 | orchestrator | 2025-09-18 10:50:53 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 
10:50:53.240641 | orchestrator | 2025-09-18 10:50:53 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:53.240661 | orchestrator | 2025-09-18 10:50:53 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:53.240685 | orchestrator | 2025-09-18 10:50:53 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:53.241105 | orchestrator | 2025-09-18 10:50:53 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:53.241137 | orchestrator | 2025-09-18 10:50:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:56.318132 | orchestrator | 2025-09-18 10:50:56 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:56.318188 | orchestrator | 2025-09-18 10:50:56 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:56.318198 | orchestrator | 2025-09-18 10:50:56 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:56.318207 | orchestrator | 2025-09-18 10:50:56 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 10:50:56.318216 | orchestrator | 2025-09-18 10:50:56 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:50:56.318225 | orchestrator | 2025-09-18 10:50:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:50:59.310805 | orchestrator | 2025-09-18 10:50:59 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:50:59.311094 | orchestrator | 2025-09-18 10:50:59 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:50:59.311789 | orchestrator | 2025-09-18 10:50:59 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state STARTED 2025-09-18 10:50:59.312535 | orchestrator | 2025-09-18 10:50:59 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED 2025-09-18 
10:50:59.313143 | orchestrator | 2025-09-18 10:50:59 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED
2025-09-18 10:50:59.313461 | orchestrator | 2025-09-18 10:50:59 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds repeated every ~3 seconds from 10:51:02 to 10:52:06; tasks a89b5aba-9d8f-41ee-912b-0aaeced2d165, a60117b5-5b48-4202-aebe-383755aeb43c, 7e97563f-1e88-40f8-aebe-048d2b4533d4, 58a3461b-b3cc-48e6-8cb0-17cd4bcce206, and 3cea9355-0a8c-4256-ae46-c45e12c79f62 all remained in state STARTED ...]
2025-09-18 10:52:09.053120 | orchestrator | 2025-09-18 10:52:09 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED
2025-09-18 10:52:09.053625 | orchestrator | 2025-09-18 10:52:09 | INFO  | Task 
a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED
2025-09-18 10:52:09.053918 | orchestrator | 2025-09-18 10:52:09 | INFO  | Task 7e97563f-1e88-40f8-aebe-048d2b4533d4 is in state SUCCESS
2025-09-18 10:52:09.054712 | orchestrator | 2025-09-18 10:52:09 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED
2025-09-18 10:52:09.055320 | orchestrator | 2025-09-18 10:52:09 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED
2025-09-18 10:52:09.055484 | orchestrator | 2025-09-18 10:52:09 | INFO  | Wait 1 second(s) until the next check
[... identical polling rounds repeated every ~3 seconds from 10:52:12 to 10:52:42; the four remaining tasks stayed in state STARTED ...]
2025-09-18 10:52:45.512032 | orchestrator | 2025-09-18 10:52:45 | INFO  | Task 
a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED
2025-09-18 10:52:45.513389 | orchestrator | 2025-09-18 10:52:45 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED
2025-09-18 10:52:45.513691 | orchestrator | 2025-09-18 10:52:45 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state STARTED
2025-09-18 10:52:45.515149 | orchestrator | 2025-09-18 10:52:45 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED
2025-09-18 10:52:45.515199 | orchestrator | 2025-09-18 10:52:45 | INFO  | Wait 1 second(s) until the next check
2025-09-18 10:52:48.537540 | orchestrator | 2025-09-18 10:52:48 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED
2025-09-18 10:52:48.537716 | orchestrator | 2025-09-18 10:52:48 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED
2025-09-18 10:52:48.538649 | orchestrator | 2025-09-18 10:52:48 | INFO  | Task 58a3461b-b3cc-48e6-8cb0-17cd4bcce206 is in state SUCCESS
2025-09-18 10:52:48.540034 | orchestrator |
2025-09-18 10:52:48.540064 | orchestrator |
2025-09-18 10:52:48.540076 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:52:48.540088 | orchestrator |
2025-09-18 10:52:48.540100 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:52:48.540111 | orchestrator | Thursday 18 September 2025 10:50:26 +0000 (0:00:00.178) 0:00:00.178 ****
2025-09-18 10:52:48.540123 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:52:48.540134 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:52:48.540146 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:52:48.540157 | orchestrator |
2025-09-18 10:52:48.540168 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:52:48.540179 | orchestrator | Thursday 18 September 2025 10:50:27 +0000 (0:00:00.303) 0:00:00.481 ****
2025-09-18 10:52:48.540190 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True)
2025-09-18 10:52:48.540202 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True)
2025-09-18 10:52:48.540213 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True)
2025-09-18 10:52:48.540224 | orchestrator |
2025-09-18 10:52:48.540235 | orchestrator | PLAY [Wait for the Keystone service] *******************************************
2025-09-18 10:52:48.540246 | orchestrator |
2025-09-18 10:52:48.540257 | orchestrator | TASK [Waiting for Keystone public port to be UP] *******************************
2025-09-18 10:52:48.540269 | orchestrator | Thursday 18 September 2025 10:50:27 +0000 (0:00:00.789) 0:00:01.271 ****
2025-09-18 10:52:48.540280 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:52:48.540291 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:52:48.540302 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:52:48.540313 | orchestrator |
2025-09-18 10:52:48.540324 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:52:48.540363 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.540376 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.540387 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.540399 | orchestrator |
2025-09-18 10:52:48.540410 | orchestrator |
2025-09-18 10:52:48.540421 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:52:48.540432 | orchestrator | Thursday 18 September 2025 10:50:28 +0000 (0:00:00.783) 0:00:02.055 ****
2025-09-18 10:52:48.540443 | orchestrator | ===============================================================================
2025-09-18 10:52:48.540454 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.79s
2025-09-18 10:52:48.540465 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 0.78s
2025-09-18 10:52:48.540476 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.30s
2025-09-18 10:52:48.540487 | orchestrator |
2025-09-18 10:52:48.540498 | orchestrator |
2025-09-18 10:52:48.540510 | orchestrator | PLAY [Bootstraph ceph dashboard] ***********************************************
2025-09-18 10:52:48.540521 | orchestrator |
2025-09-18 10:52:48.540555 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-09-18 10:52:48.540664 | orchestrator | Thursday 18 September 2025 10:50:27 +0000 (0:00:00.268) 0:00:00.268 ****
2025-09-18 10:52:48.540677 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540689 | orchestrator |
2025-09-18 10:52:48.540700 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-09-18 10:52:48.540710 | orchestrator | Thursday 18 September 2025 10:50:29 +0000 (0:00:02.248) 0:00:02.517 ****
2025-09-18 10:52:48.540722 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540733 | orchestrator |
2025-09-18 10:52:48.540745 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-09-18 10:52:48.540756 | orchestrator | Thursday 18 September 2025 10:50:30 +0000 (0:00:00.926) 0:00:03.443 ****
2025-09-18 10:52:48.540766 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540777 | orchestrator |
2025-09-18 10:52:48.540788 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-09-18 10:52:48.540799 | orchestrator | Thursday 18 September 2025 10:50:31 +0000 (0:00:01.662) 0:00:05.105 ****
2025-09-18 10:52:48.540810 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540821 | orchestrator |
2025-09-18 10:52:48.540832 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-09-18 10:52:48.540843 | orchestrator | Thursday 18 September 2025 10:50:33 +0000 (0:00:01.037) 0:00:06.143 ****
2025-09-18 10:52:48.540854 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540865 | orchestrator |
2025-09-18 10:52:48.540875 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-09-18 10:52:48.540886 | orchestrator | Thursday 18 September 2025 10:50:33 +0000 (0:00:00.926) 0:00:07.070 ****
2025-09-18 10:52:48.540897 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540908 | orchestrator |
2025-09-18 10:52:48.540919 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-09-18 10:52:48.540930 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:00.952) 0:00:08.022 ****
2025-09-18 10:52:48.540940 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540951 | orchestrator |
2025-09-18 10:52:48.540962 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-09-18 10:52:48.540973 | orchestrator | Thursday 18 September 2025 10:50:36 +0000 (0:00:01.188) 0:00:09.211 ****
2025-09-18 10:52:48.540984 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.540995 | orchestrator |
2025-09-18 10:52:48.541005 | orchestrator | TASK [Create admin user] *******************************************************
2025-09-18 10:52:48.541016 | orchestrator | Thursday 18 September 2025 10:50:37 +0000 (0:00:01.183) 0:00:10.394 ****
2025-09-18 10:52:48.541035 | orchestrator | changed: [testbed-manager]
2025-09-18 10:52:48.541046 | orchestrator |
2025-09-18 10:52:48.541057 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-09-18 10:52:48.541068 | orchestrator | Thursday 18 September 2025 10:51:42 +0000 (0:01:04.995) 0:01:15.389 ****
2025-09-18 10:52:48.541092 | orchestrator | skipping: [testbed-manager]
2025-09-18 10:52:48.541104 | orchestrator |
2025-09-18 10:52:48.541115 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-18 10:52:48.541126 | orchestrator |
2025-09-18 10:52:48.541137 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-18 10:52:48.541148 | orchestrator | Thursday 18 September 2025 10:51:42 +0000 (0:00:00.116) 0:01:15.506 ****
2025-09-18 10:52:48.541159 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.541170 | orchestrator |
2025-09-18 10:52:48.541180 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-18 10:52:48.541191 | orchestrator |
2025-09-18 10:52:48.541202 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-18 10:52:48.541213 | orchestrator | Thursday 18 September 2025 10:51:53 +0000 (0:00:11.522) 0:01:27.028 ****
2025-09-18 10:52:48.541224 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:52:48.541235 | orchestrator |
2025-09-18 10:52:48.541254 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-09-18 10:52:48.541265 | orchestrator |
2025-09-18 10:52:48.541276 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-09-18 10:52:48.541287 | orchestrator | Thursday 18 September 2025 10:52:05 +0000 (0:00:11.372) 0:01:38.400 ****
2025-09-18 10:52:48.541298 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:52:48.541309 | orchestrator |
2025-09-18 10:52:48.541319 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:52:48.541349 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-09-18 10:52:48.541361 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.541373 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.541385 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-09-18 10:52:48.541395 | orchestrator |
2025-09-18 10:52:48.541406 | orchestrator |
2025-09-18 10:52:48.541417 | orchestrator |
2025-09-18 10:52:48.541428 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:52:48.541439 | orchestrator | Thursday 18 September 2025 10:52:06 +0000 (0:00:01.283) 0:01:39.684 ****
2025-09-18 10:52:48.541450 | orchestrator | ===============================================================================
2025-09-18 10:52:48.541461 | orchestrator | Create admin user ------------------------------------------------------ 65.00s
2025-09-18 10:52:48.541471 | orchestrator | Restart ceph manager service ------------------------------------------- 24.18s
2025-09-18 10:52:48.541482 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 2.25s
2025-09-18 10:52:48.541493 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 1.66s
2025-09-18 10:52:48.541504 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.19s
2025-09-18 10:52:48.541515 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.18s
2025-09-18 10:52:48.541525 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 1.04s
2025-09-18 10:52:48.541536 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 0.95s
2025-09-18 10:52:48.541547 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.93s
2025-09-18 10:52:48.541558 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.93s
2025-09-18 10:52:48.541569 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.12s
2025-09-18 10:52:48.541580 | orchestrator |
2025-09-18 10:52:48.541590 | orchestrator |
2025-09-18 10:52:48.541601 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-09-18 10:52:48.541612 | orchestrator |
2025-09-18 10:52:48.541623 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-09-18 10:52:48.541633 | orchestrator | Thursday 18 September 2025 10:50:33 +0000 (0:00:00.248) 0:00:00.248 ****
2025-09-18 10:52:48.541644 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:52:48.541655 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:52:48.541666 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:52:48.541677 | orchestrator |
2025-09-18 10:52:48.541688 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-09-18 10:52:48.541699 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:00.353) 0:00:00.602 ****
2025-09-18 10:52:48.541710 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True)
2025-09-18 10:52:48.541721 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True)
2025-09-18 10:52:48.541732 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True)
2025-09-18 10:52:48.541743 | orchestrator |
2025-09-18 10:52:48.541754 | orchestrator | PLAY [Apply role barbican] *****************************************************
2025-09-18 10:52:48.541772 | orchestrator |
2025-09-18 10:52:48.541783 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-18 10:52:48.541794 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:00.621) 0:00:01.223 ****
2025-09-18 10:52:48.541805 | 
orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:52:48.541816 | orchestrator | 2025-09-18 10:52:48.541827 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-09-18 10:52:48.541842 | orchestrator | Thursday 18 September 2025 10:50:35 +0000 (0:00:00.688) 0:00:01.912 **** 2025-09-18 10:52:48.541854 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-09-18 10:52:48.541865 | orchestrator | 2025-09-18 10:52:48.541876 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-09-18 10:52:48.541893 | orchestrator | Thursday 18 September 2025 10:50:40 +0000 (0:00:04.668) 0:00:06.581 **** 2025-09-18 10:52:48.541904 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-09-18 10:52:48.541916 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-09-18 10:52:48.541927 | orchestrator | 2025-09-18 10:52:48.541938 | orchestrator | TASK [service-ks-register : barbican | Creating projects] ********************** 2025-09-18 10:52:48.541949 | orchestrator | Thursday 18 September 2025 10:50:47 +0000 (0:00:06.932) 0:00:13.513 **** 2025-09-18 10:52:48.541960 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-09-18 10:52:48.541971 | orchestrator | 2025-09-18 10:52:48.541981 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-09-18 10:52:48.541992 | orchestrator | Thursday 18 September 2025 10:50:50 +0000 (0:00:03.827) 0:00:17.341 **** 2025-09-18 10:52:48.542003 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:52:48.542014 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-09-18 10:52:48.542069 | orchestrator | 2025-09-18 
10:52:48.542081 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-09-18 10:52:48.542092 | orchestrator | Thursday 18 September 2025 10:50:55 +0000 (0:00:04.343) 0:00:21.685 **** 2025-09-18 10:52:48.542103 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:52:48.542114 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-09-18 10:52:48.542125 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-09-18 10:52:48.542136 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-09-18 10:52:48.542147 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-09-18 10:52:48.542158 | orchestrator | 2025-09-18 10:52:48.542169 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-09-18 10:52:48.542180 | orchestrator | Thursday 18 September 2025 10:51:13 +0000 (0:00:17.895) 0:00:39.580 **** 2025-09-18 10:52:48.542191 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-09-18 10:52:48.542202 | orchestrator | 2025-09-18 10:52:48.542213 | orchestrator | TASK [barbican : Ensuring config directories exist] **************************** 2025-09-18 10:52:48.542224 | orchestrator | Thursday 18 September 2025 10:51:17 +0000 (0:00:04.037) 0:00:43.618 **** 2025-09-18 10:52:48.542238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 
'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542286 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542299 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542311 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542369 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542399 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.542410 | orchestrator |
2025-09-18 10:52:48.542422 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ********************
2025-09-18 10:52:48.542439 | orchestrator | Thursday 18 September 2025 10:51:19 +0000 (0:00:02.104) 0:00:45.723 ****
2025-09-18 10:52:48.542451 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals)
2025-09-18 10:52:48.542462 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals)
2025-09-18 10:52:48.542473 | orchestrator | changed: [testbed-node-2] => (item=barbican-api/vassals)
2025-09-18 10:52:48.542484 | orchestrator |
2025-09-18 10:52:48.542495 | orchestrator | TASK [barbican : Check if policies shall be overwritten] ***********************
2025-09-18 10:52:48.542506 | orchestrator | Thursday 18 September 2025 10:51:21 +0000 (0:00:01.714) 0:00:47.437 ****
2025-09-18 10:52:48.542517 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.542528 | orchestrator |
2025-09-18 10:52:48.542539 | orchestrator | TASK [barbican : Set barbican policy file] *************************************
2025-09-18 10:52:48.542550 | orchestrator | Thursday 18 September 2025 10:51:21 +0000 (0:00:00.384) 0:00:47.821 ****
2025-09-18 10:52:48.542561 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.542572 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:52:48.542583 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.542594 | orchestrator |
2025-09-18 10:52:48.542605 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-18 10:52:48.542616 | orchestrator | Thursday 18 September 2025 10:51:22 +0000 (0:00:00.980) 0:00:48.801 ****
2025-09-18
10:52:48.542627 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:52:48.542638 | orchestrator | 2025-09-18 10:52:48.542649 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-09-18 10:52:48.542660 | orchestrator | Thursday 18 September 2025 10:51:23 +0000 (0:00:01.001) 0:00:49.803 **** 2025-09-18 10:52:48.542672 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542707 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-09-18 10:52:48.542726 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 
'timeout': '30'}}}) 2025-09-18 10:52:48.542738 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542774 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 
2025-09-18 10:52:48.542786 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542797 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:52:48.542809 | orchestrator | 2025-09-18 10:52:48.542820 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-09-18 10:52:48.542831 | orchestrator | Thursday 18 September 2025 10:51:28 +0000 (0:00:04.615) 0:00:54.419 **** 2025-09-18 10:52:48.542853 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': 
{}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:52:48.542866 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:52:48.542883 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:52:48.542895 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:52:48.542907 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 
'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:52:48.542919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:52:48.542934 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-09-18 10:52:48.542946 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:52:48.542964 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-09-18 10:52:48.542984 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-09-18 10:52:48.542996 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 
'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543008 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.543019 | orchestrator |
2025-09-18 10:52:48.543030 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] ****
2025-09-18 10:52:48.543041 | orchestrator | Thursday 18 September 2025 10:51:29 +0000 (0:00:01.412) 0:00:55.831 ****
2025-09-18 10:52:48.543052 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543087 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543099 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.543110 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543150 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:52:48.543162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543195 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543214 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.543225 | orchestrator |
2025-09-18 10:52:48.543236 | orchestrator | TASK [barbican : Copying over config.json files for services] ******************
2025-09-18 10:52:48.543247 | orchestrator | Thursday 18 September 2025 10:51:30 +0000 (0:00:01.520) 0:00:57.351 ****
2025-09-18 10:52:48.543259 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543271 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543282 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543306 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543381 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543416 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543427 | orchestrator |
2025-09-18 10:52:48.543438 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ********************************
2025-09-18 10:52:48.543449 | orchestrator | Thursday 18 September 2025 10:51:35 +0000 (0:00:04.083) 0:01:01.434 ****
2025-09-18 10:52:48.543460 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.543472 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:52:48.543482 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:52:48.543493 | orchestrator |
2025-09-18 10:52:48.543504 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] **********
2025-09-18 10:52:48.543515 | orchestrator | Thursday 18 September 2025 10:51:37 +0000 (0:00:02.935) 0:01:04.370 ****
2025-09-18 10:52:48.543533 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:52:48.543544 | orchestrator |
2025-09-18 10:52:48.543555 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] **************************
2025-09-18 10:52:48.543570 | orchestrator | Thursday 18 September 2025 10:51:39 +0000 (0:00:01.256) 0:01:05.627 ****
2025-09-18 10:52:48.543582 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.543593 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.543604 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:52:48.543615 | orchestrator |
2025-09-18 10:52:48.543630 | orchestrator | TASK [barbican : Copying over barbican.conf] ***********************************
2025-09-18 10:52:48.543640 | orchestrator | Thursday 18 September 2025 10:51:39 +0000 (0:00:00.596) 0:01:06.224 ****
2025-09-18 10:52:48.543651 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543661 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543707 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543728 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543738 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543749 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543759 | orchestrator |
2025-09-18 10:52:48.543769 | orchestrator | TASK [barbican : Copying over existing policy file] ****************************
2025-09-18 10:52:48.543779 | orchestrator | Thursday 18 September 2025 10:51:51 +0000 (0:00:12.052) 0:01:18.276 ****
2025-09-18 10:52:48.543789 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543818 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543839 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.543849 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543860 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543870 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543885 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.543899 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543915 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543926 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.543936 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:52:48.543946 | orchestrator |
2025-09-18 10:52:48.543956 | orchestrator | TASK [barbican : Check barbican containers] ************************************
2025-09-18 10:52:48.543966 | orchestrator | Thursday 18 September 2025 10:51:52 +0000 (0:00:00.878) 0:01:19.154 ****
2025-09-18 10:52:48.543976 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.543987 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.544006 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-api:2024.2', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})
2025-09-18 10:52:48.544022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544043 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-keystone-listener:2024.2', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544053 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/barbican-worker:2024.2', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:52:48.544090 | orchestrator |
2025-09-18 10:52:48.544099 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-09-18 10:52:48.544109 | orchestrator | Thursday 18 September 2025 10:51:57 +0000 (0:00:05.237) 0:01:24.392 ****
2025-09-18 10:52:48.544119 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:52:48.544133 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:52:48.544143 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:52:48.544153 | orchestrator |
2025-09-18 10:52:48.544163 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-09-18 10:52:48.544178 | orchestrator | Thursday 18 September 2025 10:51:58 +0000 (0:00:00.424) 0:01:24.816 ****
2025-09-18 10:52:48.544188 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544198 | orchestrator |
2025-09-18 10:52:48.544207 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-09-18 10:52:48.544217 | orchestrator | Thursday 18 September 2025 10:52:00 +0000 (0:00:02.535) 0:01:27.351 ****
2025-09-18 10:52:48.544227 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544236 | orchestrator |
2025-09-18 10:52:48.544246 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-09-18 10:52:48.544256 | orchestrator | Thursday 18 September 2025 10:52:03 +0000 (0:00:02.590) 0:01:29.942 ****
2025-09-18 10:52:48.544265 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544275 | orchestrator |
2025-09-18 10:52:48.544285 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-18 10:52:48.544295 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:11.877) 0:01:41.820 ****
2025-09-18 10:52:48.544305 | orchestrator |
2025-09-18 10:52:48.544315 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-18 10:52:48.544324 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.168) 0:01:41.988 ****
2025-09-18 10:52:48.544348 | orchestrator |
2025-09-18 10:52:48.544358 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-09-18 10:52:48.544368 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.127) 0:01:42.116 ****
2025-09-18 10:52:48.544377 | orchestrator |
2025-09-18 10:52:48.544387 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-09-18 10:52:48.544397 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.129) 0:01:42.245 ****
2025-09-18 10:52:48.544406 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544416 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:52:48.544426 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:52:48.544435 | orchestrator |
2025-09-18 10:52:48.544445 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-09-18 10:52:48.544455 | orchestrator | Thursday 18 September 2025 10:52:23 +0000 (0:00:07.738) 0:01:49.984 ****
2025-09-18 10:52:48.544470 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544480 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:52:48.544489 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:52:48.544499 | orchestrator |
2025-09-18 10:52:48.544508 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-09-18 10:52:48.544518 | orchestrator | Thursday 18 September 2025 10:52:34 +0000 (0:00:10.714) 0:02:00.698 ****
2025-09-18 10:52:48.544528 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:52:48.544538 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:52:48.544547 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:52:48.544557 | orchestrator |
2025-09-18 10:52:48.544567 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18
10:52:48.544577 | orchestrator | testbed-node-0 : ok=24  changed=19  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 10:52:48.544587 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:52:48.544597 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:52:48.544607 | orchestrator | 2025-09-18 10:52:48.544616 | orchestrator | 2025-09-18 10:52:48.544626 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:52:48.544635 | orchestrator | Thursday 18 September 2025 10:52:47 +0000 (0:00:13.292) 0:02:13.991 **** 2025-09-18 10:52:48.544645 | orchestrator | =============================================================================== 2025-09-18 10:52:48.544655 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 17.90s 2025-09-18 10:52:48.544665 | orchestrator | barbican : Restart barbican-worker container --------------------------- 13.29s 2025-09-18 10:52:48.544674 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.05s 2025-09-18 10:52:48.544684 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 11.88s 2025-09-18 10:52:48.544693 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 10.71s 2025-09-18 10:52:48.544703 | orchestrator | barbican : Restart barbican-api container ------------------------------- 7.74s 2025-09-18 10:52:48.544713 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.93s 2025-09-18 10:52:48.544722 | orchestrator | barbican : Check barbican containers ------------------------------------ 5.24s 2025-09-18 10:52:48.544732 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 4.67s 2025-09-18 10:52:48.544741 | 
orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.62s 2025-09-18 10:52:48.544751 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 4.34s 2025-09-18 10:52:48.544761 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.08s 2025-09-18 10:52:48.544770 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.04s 2025-09-18 10:52:48.544780 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.83s 2025-09-18 10:52:48.544790 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 2.94s 2025-09-18 10:52:48.544799 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.59s 2025-09-18 10:52:48.544813 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.54s 2025-09-18 10:52:48.544823 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.10s 2025-09-18 10:52:48.544833 | orchestrator | barbican : Ensuring vassals config directories exist -------------------- 1.71s 2025-09-18 10:52:48.544847 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 1.52s 2025-09-18 10:52:48.544857 | orchestrator | 2025-09-18 10:52:48 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:52:48.544873 | orchestrator | 2025-09-18 10:52:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:52:51.569018 | orchestrator | 2025-09-18 10:52:51 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:52:51.569393 | orchestrator | 2025-09-18 10:52:51 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:52:51.570095 | orchestrator | 2025-09-18 10:52:51 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 
2025-09-18 10:52:51.570816 | orchestrator | 2025-09-18 10:52:51 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:52:51.570836 | orchestrator | 2025-09-18 10:52:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:52:54.595799 | orchestrator | 2025-09-18 10:52:54 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:52:54.596065 | orchestrator | 2025-09-18 10:52:54 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:52:54.596913 | orchestrator | 2025-09-18 10:52:54 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:52:54.597591 | orchestrator | 2025-09-18 10:52:54 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:52:54.597708 | orchestrator | 2025-09-18 10:52:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:52:57.622443 | orchestrator | 2025-09-18 10:52:57 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:52:57.622749 | orchestrator | 2025-09-18 10:52:57 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:52:57.623766 | orchestrator | 2025-09-18 10:52:57 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:52:57.625129 | orchestrator | 2025-09-18 10:52:57 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:52:57.625152 | orchestrator | 2025-09-18 10:52:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:00.660037 | orchestrator | 2025-09-18 10:53:00 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:00.660158 | orchestrator | 2025-09-18 10:53:00 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:00.662718 | orchestrator | 2025-09-18 10:53:00 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:00.662757 | 
orchestrator | 2025-09-18 10:53:00 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:00.662770 | orchestrator | 2025-09-18 10:53:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:03.689603 | orchestrator | 2025-09-18 10:53:03 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:03.690241 | orchestrator | 2025-09-18 10:53:03 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:03.691110 | orchestrator | 2025-09-18 10:53:03 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:03.692584 | orchestrator | 2025-09-18 10:53:03 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:03.692610 | orchestrator | 2025-09-18 10:53:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:06.718455 | orchestrator | 2025-09-18 10:53:06 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:06.719632 | orchestrator | 2025-09-18 10:53:06 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:06.721943 | orchestrator | 2025-09-18 10:53:06 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:06.723819 | orchestrator | 2025-09-18 10:53:06 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:06.724051 | orchestrator | 2025-09-18 10:53:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:09.767355 | orchestrator | 2025-09-18 10:53:09 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:09.770375 | orchestrator | 2025-09-18 10:53:09 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:09.772869 | orchestrator | 2025-09-18 10:53:09 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:09.774889 | orchestrator | 2025-09-18 
10:53:09 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:09.774993 | orchestrator | 2025-09-18 10:53:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:12.823198 | orchestrator | 2025-09-18 10:53:12 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:12.825273 | orchestrator | 2025-09-18 10:53:12 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:12.829333 | orchestrator | 2025-09-18 10:53:12 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:12.831528 | orchestrator | 2025-09-18 10:53:12 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:12.831992 | orchestrator | 2025-09-18 10:53:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:15.867969 | orchestrator | 2025-09-18 10:53:15 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:15.869509 | orchestrator | 2025-09-18 10:53:15 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:15.870698 | orchestrator | 2025-09-18 10:53:15 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:15.871156 | orchestrator | 2025-09-18 10:53:15 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:15.871176 | orchestrator | 2025-09-18 10:53:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:18.907592 | orchestrator | 2025-09-18 10:53:18 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:18.910353 | orchestrator | 2025-09-18 10:53:18 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:18.911058 | orchestrator | 2025-09-18 10:53:18 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:18.913422 | orchestrator | 2025-09-18 10:53:18 | INFO  | Task 
3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:18.913485 | orchestrator | 2025-09-18 10:53:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:21.936438 | orchestrator | 2025-09-18 10:53:21 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:21.937083 | orchestrator | 2025-09-18 10:53:21 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:21.938509 | orchestrator | 2025-09-18 10:53:21 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:21.941033 | orchestrator | 2025-09-18 10:53:21 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:21.941057 | orchestrator | 2025-09-18 10:53:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:24.984127 | orchestrator | 2025-09-18 10:53:24 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:24.985579 | orchestrator | 2025-09-18 10:53:24 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:24.987358 | orchestrator | 2025-09-18 10:53:24 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:24.988912 | orchestrator | 2025-09-18 10:53:24 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:24.988936 | orchestrator | 2025-09-18 10:53:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:28.042386 | orchestrator | 2025-09-18 10:53:28 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:28.043941 | orchestrator | 2025-09-18 10:53:28 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:28.045501 | orchestrator | 2025-09-18 10:53:28 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:28.047373 | orchestrator | 2025-09-18 10:53:28 | INFO  | Task 
3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:28.047399 | orchestrator | 2025-09-18 10:53:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:31.081717 | orchestrator | 2025-09-18 10:53:31 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:31.082125 | orchestrator | 2025-09-18 10:53:31 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:31.083017 | orchestrator | 2025-09-18 10:53:31 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:31.083865 | orchestrator | 2025-09-18 10:53:31 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:31.083888 | orchestrator | 2025-09-18 10:53:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:34.172136 | orchestrator | 2025-09-18 10:53:34 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:34.174095 | orchestrator | 2025-09-18 10:53:34 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:34.176089 | orchestrator | 2025-09-18 10:53:34 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:34.178337 | orchestrator | 2025-09-18 10:53:34 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:34.178727 | orchestrator | 2025-09-18 10:53:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:37.226454 | orchestrator | 2025-09-18 10:53:37 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:37.226793 | orchestrator | 2025-09-18 10:53:37 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:37.228265 | orchestrator | 2025-09-18 10:53:37 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:37.230585 | orchestrator | 2025-09-18 10:53:37 | INFO  | Task 
3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:37.230614 | orchestrator | 2025-09-18 10:53:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:40.283456 | orchestrator | 2025-09-18 10:53:40 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:40.284229 | orchestrator | 2025-09-18 10:53:40 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:40.285398 | orchestrator | 2025-09-18 10:53:40 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:40.286914 | orchestrator | 2025-09-18 10:53:40 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:40.286954 | orchestrator | 2025-09-18 10:53:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:43.328084 | orchestrator | 2025-09-18 10:53:43 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:43.330624 | orchestrator | 2025-09-18 10:53:43 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:43.333879 | orchestrator | 2025-09-18 10:53:43 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:43.333904 | orchestrator | 2025-09-18 10:53:43 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:43.333917 | orchestrator | 2025-09-18 10:53:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:46.386678 | orchestrator | 2025-09-18 10:53:46 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:46.388677 | orchestrator | 2025-09-18 10:53:46 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:46.390674 | orchestrator | 2025-09-18 10:53:46 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:46.392209 | orchestrator | 2025-09-18 10:53:46 | INFO  | Task 
3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:46.392237 | orchestrator | 2025-09-18 10:53:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:49.428388 | orchestrator | 2025-09-18 10:53:49 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:49.428497 | orchestrator | 2025-09-18 10:53:49 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:49.430511 | orchestrator | 2025-09-18 10:53:49 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:49.431452 | orchestrator | 2025-09-18 10:53:49 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:49.431477 | orchestrator | 2025-09-18 10:53:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:52.463488 | orchestrator | 2025-09-18 10:53:52 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state STARTED 2025-09-18 10:53:52.466563 | orchestrator | 2025-09-18 10:53:52 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED 2025-09-18 10:53:52.469169 | orchestrator | 2025-09-18 10:53:52 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:52.470370 | orchestrator | 2025-09-18 10:53:52 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:52.470616 | orchestrator | 2025-09-18 10:53:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:55.504988 | orchestrator | 2025-09-18 10:53:55 | INFO  | Task a89b5aba-9d8f-41ee-912b-0aaeced2d165 is in state SUCCESS 2025-09-18 10:53:55.506100 | orchestrator | 2025-09-18 10:53:55.506148 | orchestrator | 2025-09-18 10:53:55.506165 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:53:55.506234 | orchestrator | 2025-09-18 10:53:55.506256 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 
2025-09-18 10:53:55.506486 | orchestrator | Thursday 18 September 2025 10:50:26 +0000 (0:00:00.283) 0:00:00.283 **** 2025-09-18 10:53:55.506500 | orchestrator | ok: [testbed-manager] 2025-09-18 10:53:55.506513 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:53:55.506524 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:53:55.506542 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:53:55.506554 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:53:55.506565 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:53:55.506575 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:53:55.506586 | orchestrator | 2025-09-18 10:53:55.506598 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:53:55.506634 | orchestrator | Thursday 18 September 2025 10:50:27 +0000 (0:00:01.026) 0:00:01.309 **** 2025-09-18 10:53:55.506673 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506688 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506700 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506820 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506834 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506847 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506859 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-09-18 10:53:55.506871 | orchestrator | 2025-09-18 10:53:55.506884 | orchestrator | PLAY [Apply role prometheus] *************************************************** 2025-09-18 10:53:55.506896 | orchestrator | 2025-09-18 10:53:55.506909 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-18 10:53:55.506921 | orchestrator | Thursday 18 September 2025 10:50:28 +0000 (0:00:00.755) 0:00:02.065 
**** 2025-09-18 10:53:55.506934 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:53:55.506963 | orchestrator | 2025-09-18 10:53:55.506986 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-09-18 10:53:55.507000 | orchestrator | Thursday 18 September 2025 10:50:30 +0000 (0:00:01.689) 0:00:03.754 **** 2025-09-18 10:53:55.507041 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:53:55.507075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507137 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507214 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 
10:53:55.507250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507262 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507299 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507313 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507342 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507400 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507414 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 
'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:53:55.507431 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 
10:53:55.507455 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507466 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507483 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507569 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507655 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507667 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507679 | orchestrator | changed: [testbed-node-4] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507691 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507702 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507782 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.507794 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.507805 | orchestrator | 2025-09-18 10:53:55.507817 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-09-18 10:53:55.507828 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:03.722) 0:00:07.477 **** 2025-09-18 10:53:55.507839 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:53:55.507850 | orchestrator | 2025-09-18 10:53:55.507861 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-09-18 10:53:55.507872 | orchestrator | Thursday 18 September 2025 10:50:35 +0000 (0:00:01.537) 0:00:09.014 **** 
2025-09-18 10:53:55.507883 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:53:55.507895 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507906 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507930 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507962 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507986 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.507997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.508008 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508035 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508047 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508071 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508091 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508103 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-09-18 10:53:55.508114 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508126 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:53:55.508138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508158 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508180 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508211 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508222 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.508234 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508257 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508305 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.508323 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.509338 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 
'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.509368 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.509380 | orchestrator | 2025-09-18 10:53:55.509391 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-09-18 10:53:55.509402 | orchestrator | Thursday 18 September 2025 10:50:41 +0000 (0:00:06.309) 0:00:15.324 **** 2025-09-18 10:53:55.509414 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 10:53:55.509426 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509450 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509471 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 10:53:55.509492 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509516 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509527 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509556 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509568 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.509580 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509596 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509637 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509666 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509677 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.509688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509700 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.509732 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.509793 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509804 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.509816 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509827 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509838 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.509856 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509868 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509879 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509890 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.509903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.509916 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509937 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.509950 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.509962 | orchestrator | 2025-09-18 10:53:55.509974 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] *** 2025-09-18 10:53:55.509986 | orchestrator | Thursday 18 September 2025 10:50:43 +0000 (0:00:01.490) 0:00:16.815 **** 2025-09-18 10:53:55.510149 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-09-18 10:53:55.510188 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 
'dimensions': {}}})  2025-09-18 10:53:55.510201 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510214 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510312 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': 
['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-09-18 10:53:55.510338 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510370 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510393 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510404 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.510416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510427 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510443 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510474 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510524 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.510535 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.510547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510558 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510570 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510627 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510640 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-09-18 10:53:55.510659 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510670 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.510681 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.510692 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510704 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510715 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510726 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.510738 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-09-18 10:53:55.510754 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510772 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-09-18 10:53:55.510790 | orchestrator | skipping: 
[testbed-node-5] 2025-09-18 10:53:55.510801 | orchestrator | 2025-09-18 10:53:55.510812 | orchestrator | TASK [prometheus : Copying over config.json files] ***************************** 2025-09-18 10:53:55.510824 | orchestrator | Thursday 18 September 2025 10:50:45 +0000 (0:00:02.052) 0:00:18.867 **** 2025-09-18 10:53:55.510835 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:53:55.510847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510858 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510869 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510896 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510914 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 
'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.510944 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.510956 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.510968 | 
orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.510979 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.510990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511007 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511031 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511044 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:53:55.511056 | orchestrator | changed: [testbed-node-3] => 
(item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511067 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511078 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511089 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-18 10:53:55.511109 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511145 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511156 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.511190 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511202 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511223 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.511235 | orchestrator | 2025-09-18 10:53:55.511246 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-09-18 10:53:55.511257 | orchestrator | Thursday 18 September 2025 10:50:51 +0000 (0:00:06.209) 0:00:25.077 **** 2025-09-18 10:53:55.511295 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:53:55.511307 | orchestrator | 2025-09-18 10:53:55.511319 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-09-18 10:53:55.511335 | orchestrator | Thursday 18 September 2025 10:50:52 +0000 (0:00:01.048) 0:00:26.126 **** 2025-09-18 10:53:55.511347 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 
'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511360 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511372 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511383 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': 
True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511395 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511414 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511435 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': 
False}) 2025-09-18 10:53:55.511447 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511459 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511470 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511481 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': 
'0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1328964, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7959893, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511492 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511511 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511533 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 
'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511545 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328971, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7986555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511556 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511567 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511579 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328971, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7986555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511590 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511608 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  
2025-09-18 10:53:55.511630 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328957, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7931097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511642 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328971, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7986555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511653 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328957, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7931097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511665 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328971, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7986555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511676 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1328971, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7986555, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511693 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328965, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7963617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511704 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328965, 
'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7963617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511720 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 55956, 'inode': 1328961, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511738 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328969, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.798272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511750 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328957, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7931097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 
'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511761 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328969, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.798272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.511772 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12980, 'inode': 1328978, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8007572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.511790 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3900, 'inode': 1328957, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7931097, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 
10:53:55.511801 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328966, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7969632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-09-18 10:53:55.511817 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/haproxy.rules)
2025-09-18 10:53:55.511835 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/haproxy.rules)
2025-09-18 10:53:55.511847 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.511858 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/cadvisor.rules)
2025-09-18 10:53:55.511869 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rules)
2025-09-18 10:53:55.511888 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/openstack.rules)
2025-09-18 10:53:55.511899 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55.511919 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/hardware.rules)
2025-09-18 10:53:55.511937 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/hardware.rules)
2025-09-18 10:53:55.511948 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rules)
2025-09-18 10:53:55.511960 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/haproxy.rules)
2025-09-18 10:53:55.511978 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/cadvisor.rules)
2025-09-18 10:53:55.511989 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512000 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/hardware.rules)
2025-09-18 10:53:55.512016 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/ceph.rules)
2025-09-18 10:53:55.512034 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.512045 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/node.rules)
2025-09-18 10:53:55.512057 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.512075 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.512086 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512097 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/haproxy.rules)
2025-09-18 10:53:55.512113 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55.512132 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/hardware.rules)
2025-09-18 10:53:55.512144 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55.512155 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55.512174 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.512185 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512196 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-18 10:53:55.512212 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/node.rules)
2025-09-18 10:53:55.512231 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state STARTED
2025-09-18 10:53:55.512491 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-18 10:53:55.512513 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512525 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/openstack.rules)
2025-09-18 10:53:55.512536 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512547 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/alertmanager.rules)
2025-09-18 10:53:55.512565 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512586 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512598 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/hardware.rules)
2025-09-18 10:53:55.512616 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512627 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512639 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-18 10:53:55.512649 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/node.rec.rules)
2025-09-18 10:53:55.512663 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512679 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/elasticsearch.rules)
2025-09-18 10:53:55.512690 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-18 10:53:55.512706 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-18 10:53:55.512716 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/mysql.rules)
2025-09-18 10:53:55.512725 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/alertmanager.rules)
2025-09-18 10:53:55.512736 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-18 10:53:55.512750 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/prometheus-extra.rules)
2025-09-18 10:53:55.512765 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus.rec.rules)
2025-09-18 10:53:55.512781 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/node.rec.rules)
2025-09-18 10:53:55.512791 | orchestrator | changed: [testbed-manager] => (item=/operations/prometheus/cadvisor.rules)
2025-09-18 10:53:55.512801 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-18 10:53:55.512811 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/mysql.rules)
2025-09-18 10:53:55.512821 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-18 10:53:55.512835 | orchestrator | skipping: [testbed-node-0] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-18 10:53:55.512851 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/alertmanager.rec.rules)
2025-09-18 10:53:55.512870 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:55.512880 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/ceph.rec.rules)
2025-09-18 10:53:55.512891 | orchestrator | skipping: [testbed-node-4] => (item=/operations/prometheus/rabbitmq.rules)
2025-09-18 10:53:55.512901 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:53:55.512911 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/alertmanager.rules)
2025-09-18 10:53:55.512921 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/alertmanager.rules)
2025-09-18 10:53:55.512931 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/redfish.rules)
2025-09-18 10:53:55.512945 | orchestrator | skipping: [testbed-node-1] => (item=/operations/prometheus/node.rec.rules)
2025-09-18 10:53:55.512960 | orchestrator | skipping: [testbed-node-3] => (item=/operations/prometheus/alertmanager.rules)
2025-09-18 10:53:55.512977 | orchestrator | skipping: [testbed-node-2] => (item=/operations/prometheus/node.rec.rules)
2025-09-18 10:53:55.512987 | orchestrator | skipping: [testbed-node-5] => (item=/operations/prometheus/prometheus-extra.rules)
True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.512997 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328968, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7980194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513007 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328967, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7975852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513017 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328967, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7975852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513032 | orchestrator | skipping: 
[testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328959, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.793538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513053 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328967, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7975852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513064 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328956, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7927911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513074 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 
'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328987, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8037572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513084 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.513095 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328987, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8037572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513105 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.513116 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328987, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8037572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513127 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.513137 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': 
False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328968, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7980194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513153 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328967, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7975852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513176 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1328965, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7963617, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513187 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328987, 'dev': 
103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8037572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-09-18 10:53:55.513199 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.513210 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/node.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 13522, 'inode': 1328969, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.798272, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513221 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5593, 'inode': 1328966, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7969632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513232 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1328963, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 
1758190046.7947571, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513242 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328977, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7998915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513283 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328955, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7917569, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513302 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1328988, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.804757, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513314 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/prometheus-extra.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7408, 'inode': 1328973, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7998915, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513325 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3, 'inode': 1328959, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.793538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513337 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/alertmanager.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5051, 'inode': 1328956, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7927911, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513347 | orchestrator | changed: [testbed-manager] => 
(item={'path': '/operations/prometheus/node.rec.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2309, 'inode': 1328968, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7980194, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513359 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1328967, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.7975852, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513380 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1328987, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.8037572, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-09-18 10:53:55.513391 | orchestrator | 2025-09-18 10:53:55.513402 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ******************** 2025-09-18 10:53:55.513414 | orchestrator | Thursday 18 September 2025 
10:51:18 +0000 (0:00:25.904) 0:00:52.030 **** 2025-09-18 10:53:55.513424 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:53:55.513435 | orchestrator | 2025-09-18 10:53:55.513451 | orchestrator | TASK [prometheus : Find prometheus host config overrides] ********************** 2025-09-18 10:53:55.513461 | orchestrator | Thursday 18 September 2025 10:51:20 +0000 (0:00:01.552) 0:00:53.582 **** 2025-09-18 10:53:55.513472 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513482 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513491 | orchestrator | manager/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513501 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513511 | orchestrator | manager/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513521 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513530 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513540 | orchestrator | node-0/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513550 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513559 | orchestrator | node-0/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513569 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513579 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513589 | orchestrator | node-1/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513598 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513608 | orchestrator | node-1/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513617 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513627 | orchestrator | 
'/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513637 | orchestrator | node-3/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513646 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513656 | orchestrator | node-3/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513666 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513675 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513685 | orchestrator | node-4/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513695 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513704 | orchestrator | node-4/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513714 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513723 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513733 | orchestrator | node-2/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513742 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513758 | orchestrator | node-2/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513767 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.513777 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513787 | orchestrator | node-5/prometheus.yml.d' path due to this access issue: 2025-09-18 10:53:55.513796 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed- 2025-09-18 10:53:55.513806 | orchestrator | node-5/prometheus.yml.d' is not a directory 2025-09-18 10:53:55.513816 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:53:55.513825 | orchestrator | ok: [testbed-node-0 -> localhost] 
2025-09-18 10:53:55.513835 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-09-18 10:53:55.513845 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-09-18 10:53:55.513854 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-09-18 10:53:55.513864 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-09-18 10:53:55.513873 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-09-18 10:53:55.513883 | orchestrator | 2025-09-18 10:53:55.513892 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************ 2025-09-18 10:53:55.513902 | orchestrator | Thursday 18 September 2025 10:51:23 +0000 (0:00:02.805) 0:00:56.388 **** 2025-09-18 10:53:55.513912 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.513922 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.513932 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.513942 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.513952 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.513962 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.513971 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.513981 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.513995 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.514005 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514015 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)  2025-09-18 10:53:55.514080 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514090 | orchestrator | changed: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/prometheus.yml.j2) 2025-09-18 10:53:55.514100 | orchestrator | 2025-09-18 10:53:55.514109 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ******************** 2025-09-18 10:53:55.514119 | orchestrator | Thursday 18 September 2025 10:51:47 +0000 (0:00:24.569) 0:01:20.958 **** 2025-09-18 10:53:55.514134 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514144 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514154 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.514164 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514173 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.514183 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.514193 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514202 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514212 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514221 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514231 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)  2025-09-18 10:53:55.514241 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.514258 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2) 2025-09-18 10:53:55.514400 | orchestrator | 2025-09-18 10:53:55.514427 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] *********** 2025-09-18 10:53:55.514437 | orchestrator | Thursday 18 September 2025 
10:51:52 +0000 (0:00:04.688) 0:01:25.646 **** 2025-09-18 10:53:55.514447 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514457 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.514467 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514477 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514486 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.514496 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.514506 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514515 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514525 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514535 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.514543 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)  2025-09-18 10:53:55.514551 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514559 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml) 2025-09-18 10:53:55.514567 | orchestrator | 2025-09-18 10:53:55.514575 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ****** 2025-09-18 10:53:55.514583 | orchestrator | Thursday 18 September 2025 10:51:56 +0000 (0:00:03.752) 0:01:29.399 **** 2025-09-18 
10:53:55.514591 | orchestrator | ok: [testbed-manager -> localhost] 2025-09-18 10:53:55.514599 | orchestrator | 2025-09-18 10:53:55.514607 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] *** 2025-09-18 10:53:55.514615 | orchestrator | Thursday 18 September 2025 10:51:57 +0000 (0:00:01.008) 0:01:30.408 **** 2025-09-18 10:53:55.514622 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.514630 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.514638 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.514646 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.514654 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514661 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514669 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.514677 | orchestrator | 2025-09-18 10:53:55.514685 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ******************** 2025-09-18 10:53:55.514692 | orchestrator | Thursday 18 September 2025 10:51:58 +0000 (0:00:00.944) 0:01:31.353 **** 2025-09-18 10:53:55.514700 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.514708 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514716 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514724 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.514731 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.514739 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.514747 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:55.514754 | orchestrator | 2025-09-18 10:53:55.514762 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] *********** 2025-09-18 10:53:55.514770 | orchestrator | Thursday 18 September 2025 10:52:00 +0000 (0:00:02.660) 0:01:34.013 **** 2025-09-18 10:53:55.514785 | orchestrator | skipping: [testbed-manager] => 
(item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514805 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514813 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514821 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.514829 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.514837 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.514844 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514852 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.514868 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514876 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514884 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514892 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.514900 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)  2025-09-18 10:53:55.514907 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.514915 | orchestrator | 2025-09-18 10:53:55.514923 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ****************** 2025-09-18 10:53:55.514931 | orchestrator | Thursday 18 September 2025 10:52:03 +0000 (0:00:03.004) 0:01:37.018 **** 2025-09-18 10:53:55.514939 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.514947 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.514955 | orchestrator | skipping: [testbed-node-1] => 
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.514963 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.514971 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.514979 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.514987 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.514995 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.515002 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-09-18 10:53:55.515010 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.515018 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.515026 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)  2025-09-18 10:53:55.515034 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.515042 | orchestrator | 2025-09-18 10:53:55.515049 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-09-18 10:53:55.515057 | orchestrator | Thursday 18 September 2025 10:52:05 +0000 (0:00:01.683) 0:01:38.701 **** 2025-09-18 10:53:55.515065 | orchestrator | [WARNING]: Skipped 2025-09-18 10:53:55.515073 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-09-18 10:53:55.515081 | orchestrator | due to this access issue: 2025-09-18 10:53:55.515089 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-09-18 10:53:55.515097 | orchestrator | not a directory 2025-09-18 10:53:55.515105 | orchestrator | ok: [testbed-manager -> 
localhost] 2025-09-18 10:53:55.515113 | orchestrator | 2025-09-18 10:53:55.515121 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-09-18 10:53:55.515128 | orchestrator | Thursday 18 September 2025 10:52:07 +0000 (0:00:02.091) 0:01:40.792 **** 2025-09-18 10:53:55.515136 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.515144 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.515157 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.515165 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.515173 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.515180 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.515188 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.515196 | orchestrator | 2025-09-18 10:53:55.515203 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-09-18 10:53:55.515211 | orchestrator | Thursday 18 September 2025 10:52:08 +0000 (0:00:00.861) 0:01:41.654 **** 2025-09-18 10:53:55.515219 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.515227 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:55.515235 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:55.515242 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:55.515250 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:53:55.515258 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:53:55.515286 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:53:55.515294 | orchestrator | 2025-09-18 10:53:55.515302 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-09-18 10:53:55.515310 | orchestrator | Thursday 18 September 2025 10:52:08 +0000 (0:00:00.511) 0:01:42.165 **** 2025-09-18 10:53:55.515324 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': 
{'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-v2-server:2024.2', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-09-18 10:53:55.515339 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515348 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515357 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515365 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515380 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515388 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515397 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515409 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515423 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515432 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-mysqld-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515463 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-alertmanager:2024.2', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-09-18 10:53:55.515473 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515485 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-node-exporter:2024.2', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-09-18 10:53:55.515498 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515506 | orchestrator | changed: [testbed-node-3] => (item={'key': 
'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515515 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-memcached-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515523 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-blackbox-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515537 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', 
'/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515554 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515566 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515578 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 
'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-cadvisor:2024.2', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515587 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515595 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-libvirt-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-09-18 10:53:55.515608 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-09-18 10:53:55.515616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/prometheus-elasticsearch-exporter:2024.2', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-09-18 10:53:55.515625 | orchestrator | 2025-09-18 10:53:55.515633 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] *** 2025-09-18 10:53:55.515641 | orchestrator | Thursday 18 September 2025 10:52:14 +0000 (0:00:05.309) 0:01:47.475 **** 2025-09-18 10:53:55.515649 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-09-18 10:53:55.515657 | orchestrator | skipping: [testbed-manager] 2025-09-18 10:53:55.515665 | orchestrator | 2025-09-18 10:53:55.515673 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515680 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:01.662) 0:01:49.137 **** 2025-09-18 10:53:55.515688 | orchestrator | 2025-09-18 10:53:55.515696 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515704 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.065) 0:01:49.202 **** 2025-09-18 10:53:55.515712 | orchestrator | 2025-09-18 10:53:55.515720 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515727 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.063) 0:01:49.266 **** 2025-09-18 10:53:55.515735 | orchestrator | 2025-09-18 10:53:55.515743 | orchestrator | TASK [prometheus : Flush 
handlers] ********************************************* 2025-09-18 10:53:55.515751 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:00.060) 0:01:49.326 **** 2025-09-18 10:53:55.515759 | orchestrator | 2025-09-18 10:53:55.515766 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515774 | orchestrator | Thursday 18 September 2025 10:52:16 +0000 (0:00:00.325) 0:01:49.652 **** 2025-09-18 10:53:55.515782 | orchestrator | 2025-09-18 10:53:55.515790 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515802 | orchestrator | Thursday 18 September 2025 10:52:16 +0000 (0:00:00.060) 0:01:49.712 **** 2025-09-18 10:53:55.515810 | orchestrator | 2025-09-18 10:53:55.515818 | orchestrator | TASK [prometheus : Flush handlers] ********************************************* 2025-09-18 10:53:55.515825 | orchestrator | Thursday 18 September 2025 10:52:16 +0000 (0:00:00.060) 0:01:49.773 **** 2025-09-18 10:53:55.515833 | orchestrator | 2025-09-18 10:53:55.515841 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] ************* 2025-09-18 10:53:55.515849 | orchestrator | Thursday 18 September 2025 10:52:16 +0000 (0:00:00.080) 0:01:49.853 **** 2025-09-18 10:53:55.515857 | orchestrator | changed: [testbed-manager] 2025-09-18 10:53:55.515865 | orchestrator | 2025-09-18 10:53:55.515873 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ****** 2025-09-18 10:53:55.515885 | orchestrator | Thursday 18 September 2025 10:52:35 +0000 (0:00:18.598) 0:02:08.451 **** 2025-09-18 10:53:55.515901 | orchestrator | changed: [testbed-manager] 2025-09-18 10:53:55.515908 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:53:55.515916 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:53:55.515924 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.515932 | orchestrator | 
changed: [testbed-node-1] 2025-09-18 10:53:55.515940 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.515948 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:53:55.515956 | orchestrator | 2025-09-18 10:53:55.515963 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] **** 2025-09-18 10:53:55.515971 | orchestrator | Thursday 18 September 2025 10:52:50 +0000 (0:00:14.900) 0:02:23.352 **** 2025-09-18 10:53:55.515979 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:55.515987 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.515995 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.516003 | orchestrator | 2025-09-18 10:53:55.516011 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] *** 2025-09-18 10:53:55.516018 | orchestrator | Thursday 18 September 2025 10:53:02 +0000 (0:00:12.393) 0:02:35.745 **** 2025-09-18 10:53:55.516026 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.516034 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.516042 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:55.516050 | orchestrator | 2025-09-18 10:53:55.516058 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] *********** 2025-09-18 10:53:55.516065 | orchestrator | Thursday 18 September 2025 10:53:14 +0000 (0:00:12.314) 0:02:48.060 **** 2025-09-18 10:53:55.516073 | orchestrator | changed: [testbed-manager] 2025-09-18 10:53:55.516081 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.516089 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:53:55.516097 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:53:55.516105 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.516112 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:53:55.516120 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:55.516128 | orchestrator | 2025-09-18 
10:53:55.516136 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] ******* 2025-09-18 10:53:55.516144 | orchestrator | Thursday 18 September 2025 10:53:29 +0000 (0:00:15.065) 0:03:03.125 **** 2025-09-18 10:53:55.516152 | orchestrator | changed: [testbed-manager] 2025-09-18 10:53:55.516160 | orchestrator | 2025-09-18 10:53:55.516167 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] *** 2025-09-18 10:53:55.516175 | orchestrator | Thursday 18 September 2025 10:53:37 +0000 (0:00:08.129) 0:03:11.255 **** 2025-09-18 10:53:55.516183 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:55.516191 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:55.516199 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:55.516207 | orchestrator | 2025-09-18 10:53:55.516214 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] *** 2025-09-18 10:53:55.516222 | orchestrator | Thursday 18 September 2025 10:53:43 +0000 (0:00:05.461) 0:03:16.716 **** 2025-09-18 10:53:55.516230 | orchestrator | changed: [testbed-manager] 2025-09-18 10:53:55.516238 | orchestrator | 2025-09-18 10:53:55.516246 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] *** 2025-09-18 10:53:55.516253 | orchestrator | Thursday 18 September 2025 10:53:48 +0000 (0:00:05.198) 0:03:21.915 **** 2025-09-18 10:53:55.516261 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:53:55.516281 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:53:55.516289 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:53:55.516297 | orchestrator | 2025-09-18 10:53:55.516305 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:53:55.516313 | orchestrator | testbed-manager : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 10:53:55.516321 | 
orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 10:53:55.516334 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 10:53:55.516342 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 10:53:55.516350 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 10:53:55.516358 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 10:53:55.516366 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-09-18 10:53:55.516374 | orchestrator | 2025-09-18 10:53:55.516382 | orchestrator | 2025-09-18 10:53:55.516390 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:53:55.516401 | orchestrator | Thursday 18 September 2025 10:53:54 +0000 (0:00:05.468) 0:03:27.383 **** 2025-09-18 10:53:55.516409 | orchestrator | =============================================================================== 2025-09-18 10:53:55.516417 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 25.90s 2025-09-18 10:53:55.516425 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 24.57s 2025-09-18 10:53:55.516433 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 18.60s 2025-09-18 10:53:55.516441 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 15.07s 2025-09-18 10:53:55.516449 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 14.90s 2025-09-18 10:53:55.516461 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.39s 2025-09-18 
10:53:55.516469 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 12.31s 2025-09-18 10:53:55.516477 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 8.13s 2025-09-18 10:53:55.516485 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.31s 2025-09-18 10:53:55.516493 | orchestrator | prometheus : Copying over config.json files ----------------------------- 6.21s 2025-09-18 10:53:55.516500 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container -------------- 5.47s 2025-09-18 10:53:55.516508 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container -------- 5.46s 2025-09-18 10:53:55.516516 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.31s 2025-09-18 10:53:55.516524 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 5.20s 2025-09-18 10:53:55.516532 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.69s 2025-09-18 10:53:55.516540 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 3.75s 2025-09-18 10:53:55.516547 | orchestrator | prometheus : Ensuring config directories exist -------------------------- 3.72s 2025-09-18 10:53:55.516555 | orchestrator | prometheus : Copying cloud config file for openstack exporter ----------- 3.00s 2025-09-18 10:53:55.516563 | orchestrator | prometheus : Find prometheus host config overrides ---------------------- 2.81s 2025-09-18 10:53:55.516571 | orchestrator | prometheus : Copying over my.cnf for mysqld_exporter -------------------- 2.66s 2025-09-18 10:53:55.516579 | orchestrator | 2025-09-18 10:53:55 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:55.516587 | orchestrator | 2025-09-18 10:53:55 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 
2025-09-18 10:53:55.516595 | orchestrator | 2025-09-18 10:53:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:53:58.540703 | orchestrator | 2025-09-18 10:53:58 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:53:58.542873 | orchestrator | 2025-09-18 10:53:58 | INFO  | Task a60117b5-5b48-4202-aebe-383755aeb43c is in state SUCCESS 2025-09-18 10:53:58.542904 | orchestrator | 2025-09-18 10:53:58.544685 | orchestrator | 2025-09-18 10:53:58.544724 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:53:58.544737 | orchestrator | 2025-09-18 10:53:58.544748 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:53:58.544760 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-18 10:53:58.544771 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:53:58.544783 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:53:58.544795 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:53:58.544806 | orchestrator | 2025-09-18 10:53:58.544817 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:53:58.544829 | orchestrator | Thursday 18 September 2025 10:50:35 +0000 (0:00:00.399) 0:00:00.676 **** 2025-09-18 10:53:58.544841 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-09-18 10:53:58.544852 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-09-18 10:53:58.544864 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-09-18 10:53:58.544875 | orchestrator | 2025-09-18 10:53:58.544886 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-09-18 10:53:58.544897 | orchestrator | 2025-09-18 10:53:58.544908 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2025-09-18 10:53:58.544919 | orchestrator | Thursday 18 September 2025 10:50:35 +0000 (0:00:00.498) 0:00:01.174 **** 2025-09-18 10:53:58.544930 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:53:58.544942 | orchestrator | 2025-09-18 10:53:58.544953 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-09-18 10:53:58.544964 | orchestrator | Thursday 18 September 2025 10:50:36 +0000 (0:00:00.750) 0:00:01.925 **** 2025-09-18 10:53:58.544975 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-09-18 10:53:58.544986 | orchestrator | 2025-09-18 10:53:58.544997 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-09-18 10:53:58.545016 | orchestrator | Thursday 18 September 2025 10:50:40 +0000 (0:00:04.303) 0:00:06.229 **** 2025-09-18 10:53:58.545027 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-09-18 10:53:58.545039 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-09-18 10:53:58.545050 | orchestrator | 2025-09-18 10:53:58.545061 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-09-18 10:53:58.545086 | orchestrator | Thursday 18 September 2025 10:50:47 +0000 (0:00:06.865) 0:00:13.095 **** 2025-09-18 10:53:58.545097 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:53:58.545108 | orchestrator | 2025-09-18 10:53:58.545119 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-09-18 10:53:58.545130 | orchestrator | Thursday 18 September 2025 10:50:51 +0000 (0:00:03.801) 0:00:16.896 **** 2025-09-18 10:53:58.545141 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:53:58.545152 | 
orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-09-18 10:53:58.545163 | orchestrator | 2025-09-18 10:53:58.545174 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-09-18 10:53:58.545185 | orchestrator | Thursday 18 September 2025 10:50:55 +0000 (0:00:04.295) 0:00:21.192 **** 2025-09-18 10:53:58.545196 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:53:58.545207 | orchestrator | 2025-09-18 10:53:58.545219 | orchestrator | TASK [service-ks-register : designate | Granting user roles] ******************* 2025-09-18 10:53:58.545230 | orchestrator | Thursday 18 September 2025 10:50:59 +0000 (0:00:03.523) 0:00:24.715 **** 2025-09-18 10:53:58.545254 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-09-18 10:53:58.545319 | orchestrator | 2025-09-18 10:53:58.545333 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-09-18 10:53:58.545346 | orchestrator | Thursday 18 September 2025 10:51:03 +0000 (0:00:04.672) 0:00:29.388 **** 2025-09-18 10:53:58.545360 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 
'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545391 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 
'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545438 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545459 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545471 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': 
{'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545492 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545518 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545555 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545568 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545581 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545601 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545615 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545628 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545663 | orchestrator | 2025-09-18 10:53:58.545675 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-09-18 10:53:58.545686 | orchestrator | Thursday 18 September 2025 10:51:07 +0000 (0:00:03.370) 0:00:32.759 **** 2025-09-18 10:53:58.545697 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:58.545708 | orchestrator | 2025-09-18 10:53:58.545719 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-09-18 10:53:58.545730 | orchestrator | 
Thursday 18 September 2025 10:51:07 +0000 (0:00:00.103) 0:00:32.862 **** 2025-09-18 10:53:58.545741 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:53:58.545753 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:53:58.545764 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:53:58.545775 | orchestrator | 2025-09-18 10:53:58.545786 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-09-18 10:53:58.545797 | orchestrator | Thursday 18 September 2025 10:51:07 +0000 (0:00:00.239) 0:00:33.102 **** 2025-09-18 10:53:58.545808 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:53:58.545819 | orchestrator | 2025-09-18 10:53:58.545829 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-09-18 10:53:58.545840 | orchestrator | Thursday 18 September 2025 10:51:08 +0000 (0:00:00.612) 0:00:33.714 **** 2025-09-18 10:53:58.545852 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545871 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545883 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-09-18 10:53:58.545905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 
'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545941 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545972 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.545990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546006 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546065 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546122 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546151 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546163 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.546175 | orchestrator | 2025-09-18 10:53:58.546186 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-09-18 10:53:58.546197 | orchestrator | Thursday 18 September 2025 10:51:13 +0000 (0:00:05.522) 0:00:39.236 **** 2025-09-18 10:53:58.546209 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.546221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.546239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546326 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:53:58.546339 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.546350 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.546711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546791 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546869 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:58.546883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.546895 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.546924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546955 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546973 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.546985 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:53:58.546996 | orchestrator |
2025-09-18 10:53:58.547009 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] ***
2025-09-18 10:53:58.547022 | orchestrator | Thursday 18 September 2025 10:51:14 +0000 (0:00:01.299) 0:00:40.536 ****
2025-09-18 10:53:58.547033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547063 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547093 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547121 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:58.547133 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547144 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547162 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547180 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547207 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547219 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:53:58.547231 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547243 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547256 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547301 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547315 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547340 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:53:58.547352 | orchestrator |
2025-09-18 10:53:58.547369 | orchestrator | TASK [designate : Copying over config.json files for services] *****************
2025-09-18 10:53:58.547382 | orchestrator | Thursday 18 September 2025 10:51:16 +0000 (0:00:01.982) 0:00:42.519 ****
2025-09-18 10:53:58.547394 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547407 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547431 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547443 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547455 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547495 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547540 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547551 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547579 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547603 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547632 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547644 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547656 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.547667 | orchestrator |
2025-09-18 10:53:58.547679 | orchestrator | TASK [designate : Copying over designate.conf] *********************************
2025-09-18 10:53:58.547690 | orchestrator | Thursday 18 September 2025 10:51:24 +0000 (0:00:07.042) 0:00:49.561 ****
2025-09-18 10:53:58.547706 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547719 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547736 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.547754 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.547766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout':
'30'}}}) 2025-09-18 10:53:58.547782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547794 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547805 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547823 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547840 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 
'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547880 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547892 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547939 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.547962 | orchestrator | 2025-09-18 10:53:58.547974 | orchestrator | TASK [designate : Copying over pools.yaml] ************************************* 2025-09-18 10:53:58.547985 | orchestrator | Thursday 18 September 2025 10:51:44 +0000 (0:00:20.072) 0:01:09.634 **** 2025-09-18 10:53:58.547996 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 10:53:58.548007 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 10:53:58.548019 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2) 2025-09-18 10:53:58.548030 | orchestrator | 2025-09-18 10:53:58.548041 | orchestrator | TASK [designate : Copying over named.conf] ************************************* 2025-09-18 10:53:58.548052 | orchestrator | Thursday 18 September 2025 10:51:52 +0000 (0:00:08.373) 0:01:18.008 **** 2025-09-18 10:53:58.548063 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 10:53:58.548074 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 10:53:58.548085 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2) 2025-09-18 10:53:58.548096 | orchestrator | 2025-09-18 10:53:58.548107 | orchestrator | TASK [designate : Copying over rndc.conf] ************************************** 2025-09-18 10:53:58.548123 | orchestrator | Thursday 18 September 2025 10:51:58 +0000 (0:00:05.727) 0:01:23.735 **** 2025-09-18 10:53:58.548134 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': 
{'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548183 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548194 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548210 | orchestrator | 
skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548257 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 
'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548307 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548318 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548353 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548364 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-09-18 10:53:58.548376 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548393 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548405 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548417 | orchestrator | 2025-09-18 10:53:58.548428 | orchestrator | TASK [designate : Copying over rndc.key] *************************************** 2025-09-18 10:53:58.548439 | orchestrator | Thursday 18 September 2025 10:52:01 +0000 (0:00:03.506) 0:01:27.242 **** 2025-09-18 10:53:58.548462 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548486 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-09-18 10:53:58.548503 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-09-18 10:53:58.548515 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548526 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548552 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.548564 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548575 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548616 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.548639 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548651 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548662 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548715 | orchestrator |
2025-09-18 10:53:58.548726 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-18 10:53:58.548743 | orchestrator | Thursday 18 September 2025 10:52:04 +0000 (0:00:02.864) 0:01:30.107 ****
2025-09-18 10:53:58.548755 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:58.548766 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:53:58.548778 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:53:58.548789 | orchestrator |
2025-09-18 10:53:58.548800 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-09-18 10:53:58.548812 | orchestrator | Thursday 18 September 2025 10:52:04 +0000 (0:00:00.361) 0:01:30.468 ****
2025-09-18 10:53:58.548823 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.548836 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.548847 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548859 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.548991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549011 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:58.549027 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.549039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.549051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549112 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:53:58.549124 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.549140 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.549152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549199 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549211 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:53:58.549222 | orchestrator |
2025-09-18 10:53:58.549233 | orchestrator | TASK [designate : Check designate containers] **********************************
2025-09-18 10:53:58.549244 | orchestrator | Thursday 18 September 2025 10:52:06 +0000 (0:00:01.644) 0:01:32.113 ****
2025-09-18 10:53:58.549255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.549319 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.549333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-api:2024.2', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-09-18 10:53:58.549344 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.549369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.549381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-backend-bind9:2024.2', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-09-18 10:53:58.549397 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549409 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549421 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-central:2024.2', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549432 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549449 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549467 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-mdns:2024.2', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549494 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549505 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-producer:2024.2', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549609 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/designate-worker:2024.2', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-09-18 10:53:58.549624 | orchestrator |
2025-09-18 10:53:58.549635 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-09-18 10:53:58.549646 | orchestrator | Thursday 18 September 2025 10:52:12 +0000 (0:00:05.618) 0:01:37.731 ****
2025-09-18 10:53:58.549657 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:53:58.549668 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:53:58.549679 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:53:58.549690 | orchestrator |
2025-09-18 10:53:58.549701 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-09-18 10:53:58.549712 | orchestrator | Thursday 18 September 2025 10:52:12 +0000 (0:00:00.462) 0:01:38.194 ****
2025-09-18 10:53:58.549723 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-09-18 10:53:58.549734 | orchestrator |
2025-09-18 10:53:58.549745 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-09-18 10:53:58.549756 | orchestrator | Thursday 18 September 2025 10:52:15 +0000 (0:00:02.502) 0:01:40.696 ****
2025-09-18 10:53:58.549767 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-09-18 10:53:58.549778 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-09-18 10:53:58.549789 | orchestrator |
2025-09-18 10:53:58.549800 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-09-18 10:53:58.549811 | orchestrator | Thursday 18 September 2025 10:52:17 +0000 (0:00:02.782) 0:01:43.479 ****
2025-09-18 10:53:58.549822 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.549833 | orchestrator |
2025-09-18 10:53:58.549843 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-18 10:53:58.549854 | orchestrator | Thursday 18 September 2025 10:52:32 +0000 (0:00:14.906) 0:01:58.386 ****
2025-09-18 10:53:58.549865 | orchestrator |
2025-09-18 10:53:58.549876 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-18 10:53:58.549887 | orchestrator | Thursday 18 September 2025 10:52:33 +0000 (0:00:00.396) 0:01:58.783 ****
2025-09-18 10:53:58.549898 | orchestrator |
2025-09-18 10:53:58.549909 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-09-18 10:53:58.549920 | orchestrator | Thursday 18 September 2025 10:52:33 +0000 (0:00:00.132) 0:01:58.915 ****
2025-09-18 10:53:58.549931 | orchestrator |
2025-09-18 10:53:58.549941 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-09-18 10:53:58.549952 | orchestrator | Thursday 18 September 2025 10:52:33 +0000 (0:00:00.083) 0:01:58.999 ****
2025-09-18 10:53:58.549968 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.549979 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:53:58.549991 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:53:58.550002 | orchestrator |
2025-09-18 10:53:58.550077 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-09-18 10:53:58.550093 | orchestrator | Thursday 18 September 2025 10:52:50 +0000 (0:00:17.038) 0:02:16.038 ****
2025-09-18 10:53:58.550104 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:53:58.550116 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.550127 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:53:58.550137 | orchestrator |
2025-09-18 10:53:58.550149 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-09-18 10:53:58.550168 | orchestrator | Thursday 18 September 2025 10:53:03 +0000 (0:00:12.753) 0:02:28.791 ****
2025-09-18 10:53:58.550179 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:53:58.550191 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.550202 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:53:58.550213 | orchestrator |
2025-09-18 10:53:58.550224 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-09-18 10:53:58.550235 | orchestrator | Thursday 18 September 2025 10:53:14 +0000 (0:00:11.554) 0:02:40.346 ****
2025-09-18 10:53:58.550246 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.550257 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:53:58.550290 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:53:58.550301 | orchestrator |
2025-09-18 10:53:58.550312 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-09-18 10:53:58.550323 | orchestrator | Thursday 18 September 2025 10:53:29 +0000 (0:00:14.767) 0:02:55.113 ****
2025-09-18 10:53:58.550334 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:53:58.550345 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:53:58.550356 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:53:58.550367 | orchestrator |
2025-09-18 10:53:58.550378 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-09-18 10:53:58.550389
| orchestrator | Thursday 18 September 2025 10:53:35 +0000 (0:00:06.338) 0:03:01.452 **** 2025-09-18 10:53:58.550400 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:58.550410 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:53:58.550421 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:53:58.550432 | orchestrator | 2025-09-18 10:53:58.550443 | orchestrator | TASK [designate : Non-destructive DNS pools update] **************************** 2025-09-18 10:53:58.550454 | orchestrator | Thursday 18 September 2025 10:53:48 +0000 (0:00:12.387) 0:03:13.839 **** 2025-09-18 10:53:58.550465 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:53:58.550476 | orchestrator | 2025-09-18 10:53:58.550486 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:53:58.550498 | orchestrator | testbed-node-0 : ok=29  changed=23  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 10:53:58.550509 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:53:58.550521 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:53:58.550532 | orchestrator | 2025-09-18 10:53:58.550543 | orchestrator | 2025-09-18 10:53:58.550561 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:53:58.550573 | orchestrator | Thursday 18 September 2025 10:53:55 +0000 (0:00:07.285) 0:03:21.125 **** 2025-09-18 10:53:58.550584 | orchestrator | =============================================================================== 2025-09-18 10:53:58.550595 | orchestrator | designate : Copying over designate.conf -------------------------------- 20.07s 2025-09-18 10:53:58.550606 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 17.04s 2025-09-18 10:53:58.550617 | orchestrator | designate : Running 
Designate bootstrap container ---------------------- 14.91s 2025-09-18 10:53:58.550628 | orchestrator | designate : Restart designate-producer container ----------------------- 14.77s 2025-09-18 10:53:58.550639 | orchestrator | designate : Restart designate-api container ---------------------------- 12.75s 2025-09-18 10:53:58.550649 | orchestrator | designate : Restart designate-worker container ------------------------- 12.39s 2025-09-18 10:53:58.550660 | orchestrator | designate : Restart designate-central container ------------------------ 11.55s 2025-09-18 10:53:58.550671 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 8.37s 2025-09-18 10:53:58.550682 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 7.29s 2025-09-18 10:53:58.550703 | orchestrator | designate : Copying over config.json files for services ----------------- 7.04s 2025-09-18 10:53:58.550714 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.87s 2025-09-18 10:53:58.550725 | orchestrator | designate : Restart designate-mdns container ---------------------------- 6.34s 2025-09-18 10:53:58.550736 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.73s 2025-09-18 10:53:58.550746 | orchestrator | designate : Check designate containers ---------------------------------- 5.62s 2025-09-18 10:53:58.550757 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 5.52s 2025-09-18 10:53:58.550768 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.67s 2025-09-18 10:53:58.550779 | orchestrator | service-ks-register : designate | Creating services --------------------- 4.30s 2025-09-18 10:53:58.550790 | orchestrator | service-ks-register : designate | Creating users ------------------------ 4.30s 2025-09-18 10:53:58.550800 | orchestrator | service-ks-register : designate | 
Creating projects --------------------- 3.80s 2025-09-18 10:53:58.550811 | orchestrator | service-ks-register : designate | Creating roles ------------------------ 3.52s 2025-09-18 10:53:58.550827 | orchestrator | 2025-09-18 10:53:58 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:53:58.550838 | orchestrator | 2025-09-18 10:53:58 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:53:58.550849 | orchestrator | 2025-09-18 10:53:58 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:53:58.550860 | orchestrator | 2025-09-18 10:53:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:01.590182 | orchestrator | 2025-09-18 10:54:01 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:01.590697 | orchestrator | 2025-09-18 10:54:01 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:01.592455 | orchestrator | 2025-09-18 10:54:01 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:01.595571 | orchestrator | 2025-09-18 10:54:01 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:01.596136 | orchestrator | 2025-09-18 10:54:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:04.638144 | orchestrator | 2025-09-18 10:54:04 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:04.639839 | orchestrator | 2025-09-18 10:54:04 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:04.642738 | orchestrator | 2025-09-18 10:54:04 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:04.644429 | orchestrator | 2025-09-18 10:54:04 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:04.644708 | orchestrator | 2025-09-18 10:54:04 | INFO  | Wait 1 second(s) until the next 
check 2025-09-18 10:54:07.692385 | orchestrator | 2025-09-18 10:54:07 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:07.693646 | orchestrator | 2025-09-18 10:54:07 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:07.695204 | orchestrator | 2025-09-18 10:54:07 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:07.696519 | orchestrator | 2025-09-18 10:54:07 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:07.696747 | orchestrator | 2025-09-18 10:54:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:10.739724 | orchestrator | 2025-09-18 10:54:10 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:10.740828 | orchestrator | 2025-09-18 10:54:10 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:10.742307 | orchestrator | 2025-09-18 10:54:10 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:10.743645 | orchestrator | 2025-09-18 10:54:10 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:10.743671 | orchestrator | 2025-09-18 10:54:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:13.788337 | orchestrator | 2025-09-18 10:54:13 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:13.790940 | orchestrator | 2025-09-18 10:54:13 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:13.794085 | orchestrator | 2025-09-18 10:54:13 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:13.796182 | orchestrator | 2025-09-18 10:54:13 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:13.796208 | orchestrator | 2025-09-18 10:54:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 
10:54:16.834893 | orchestrator | 2025-09-18 10:54:16 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:16.836278 | orchestrator | 2025-09-18 10:54:16 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:16.838536 | orchestrator | 2025-09-18 10:54:16 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:16.840512 | orchestrator | 2025-09-18 10:54:16 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:16.840532 | orchestrator | 2025-09-18 10:54:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:19.896798 | orchestrator | 2025-09-18 10:54:19 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:19.901201 | orchestrator | 2025-09-18 10:54:19 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:19.906375 | orchestrator | 2025-09-18 10:54:19 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:19.911969 | orchestrator | 2025-09-18 10:54:19 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:19.912503 | orchestrator | 2025-09-18 10:54:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:22.955768 | orchestrator | 2025-09-18 10:54:22 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:22.957860 | orchestrator | 2025-09-18 10:54:22 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:22.959025 | orchestrator | 2025-09-18 10:54:22 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:22.962663 | orchestrator | 2025-09-18 10:54:22 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:22.963307 | orchestrator | 2025-09-18 10:54:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:25.998279 | orchestrator 
| 2025-09-18 10:54:25 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:25.999176 | orchestrator | 2025-09-18 10:54:26 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:26.004811 | orchestrator | 2025-09-18 10:54:26 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:26.007546 | orchestrator | 2025-09-18 10:54:26 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:26.007572 | orchestrator | 2025-09-18 10:54:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:29.032849 | orchestrator | 2025-09-18 10:54:29 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:29.035627 | orchestrator | 2025-09-18 10:54:29 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:29.037312 | orchestrator | 2025-09-18 10:54:29 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:29.039704 | orchestrator | 2025-09-18 10:54:29 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:29.039849 | orchestrator | 2025-09-18 10:54:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:32.069398 | orchestrator | 2025-09-18 10:54:32 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:32.072685 | orchestrator | 2025-09-18 10:54:32 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:32.074583 | orchestrator | 2025-09-18 10:54:32 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:32.075845 | orchestrator | 2025-09-18 10:54:32 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:32.075865 | orchestrator | 2025-09-18 10:54:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:35.112507 | orchestrator | 2025-09-18 10:54:35 | INFO  | 
Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:35.113019 | orchestrator | 2025-09-18 10:54:35 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:35.118370 | orchestrator | 2025-09-18 10:54:35 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:35.119677 | orchestrator | 2025-09-18 10:54:35 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:35.120401 | orchestrator | 2025-09-18 10:54:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:38.157147 | orchestrator | 2025-09-18 10:54:38 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:38.158127 | orchestrator | 2025-09-18 10:54:38 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:38.158829 | orchestrator | 2025-09-18 10:54:38 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:38.159511 | orchestrator | 2025-09-18 10:54:38 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:38.159532 | orchestrator | 2025-09-18 10:54:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:41.195288 | orchestrator | 2025-09-18 10:54:41 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:41.196667 | orchestrator | 2025-09-18 10:54:41 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:41.198337 | orchestrator | 2025-09-18 10:54:41 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:41.200468 | orchestrator | 2025-09-18 10:54:41 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:41.200489 | orchestrator | 2025-09-18 10:54:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:44.247395 | orchestrator | 2025-09-18 10:54:44 | INFO  | Task 
b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:44.249696 | orchestrator | 2025-09-18 10:54:44 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:44.250972 | orchestrator | 2025-09-18 10:54:44 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:44.252663 | orchestrator | 2025-09-18 10:54:44 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:44.252685 | orchestrator | 2025-09-18 10:54:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:47.299478 | orchestrator | 2025-09-18 10:54:47 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:47.299805 | orchestrator | 2025-09-18 10:54:47 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:47.300506 | orchestrator | 2025-09-18 10:54:47 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:47.301639 | orchestrator | 2025-09-18 10:54:47 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:47.301668 | orchestrator | 2025-09-18 10:54:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:50.356301 | orchestrator | 2025-09-18 10:54:50 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:50.360420 | orchestrator | 2025-09-18 10:54:50 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:50.360448 | orchestrator | 2025-09-18 10:54:50 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:50.362336 | orchestrator | 2025-09-18 10:54:50 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:50.363371 | orchestrator | 2025-09-18 10:54:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:53.403079 | orchestrator | 2025-09-18 10:54:53 | INFO  | Task 
b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:53.406601 | orchestrator | 2025-09-18 10:54:53 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:53.408113 | orchestrator | 2025-09-18 10:54:53 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:53.410404 | orchestrator | 2025-09-18 10:54:53 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:53.410425 | orchestrator | 2025-09-18 10:54:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:56.458527 | orchestrator | 2025-09-18 10:54:56 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:56.458884 | orchestrator | 2025-09-18 10:54:56 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:56.462415 | orchestrator | 2025-09-18 10:54:56 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:56.463250 | orchestrator | 2025-09-18 10:54:56 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:56.463271 | orchestrator | 2025-09-18 10:54:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:54:59.514383 | orchestrator | 2025-09-18 10:54:59 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:54:59.516680 | orchestrator | 2025-09-18 10:54:59 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:54:59.518944 | orchestrator | 2025-09-18 10:54:59 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:54:59.521196 | orchestrator | 2025-09-18 10:54:59 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:54:59.521391 | orchestrator | 2025-09-18 10:54:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:02.562334 | orchestrator | 2025-09-18 10:55:02 | INFO  | Task 
b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:55:02.566230 | orchestrator | 2025-09-18 10:55:02 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:02.566791 | orchestrator | 2025-09-18 10:55:02 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:55:02.568906 | orchestrator | 2025-09-18 10:55:02 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:55:02.568926 | orchestrator | 2025-09-18 10:55:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:05.624042 | orchestrator | 2025-09-18 10:55:05 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:55:05.625806 | orchestrator | 2025-09-18 10:55:05 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:05.627855 | orchestrator | 2025-09-18 10:55:05 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state STARTED 2025-09-18 10:55:05.629613 | orchestrator | 2025-09-18 10:55:05 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state STARTED 2025-09-18 10:55:05.629786 | orchestrator | 2025-09-18 10:55:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:08.683676 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task cb081d66-0589-4036-a735-8196864f3f66 is in state STARTED 2025-09-18 10:55:08.686278 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:55:08.688362 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:55:08.689428 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:08.693857 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task 3cea9355-0a8c-4256-ae46-c45e12c79f62 is in state SUCCESS 2025-09-18 10:55:08.696707 | orchestrator | 2025-09-18 10:55:08.696739 | orchestrator 
| 2025-09-18 10:55:08.696751 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:55:08.696762 | orchestrator | 2025-09-18 10:55:08.696774 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:55:08.696785 | orchestrator | Thursday 18 September 2025 10:50:33 +0000 (0:00:00.254) 0:00:00.254 **** 2025-09-18 10:55:08.696797 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:08.696808 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:08.696820 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:08.696831 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:55:08.696842 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:55:08.696852 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:55:08.696864 | orchestrator | 2025-09-18 10:55:08.696875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:55:08.696886 | orchestrator | Thursday 18 September 2025 10:50:34 +0000 (0:00:00.924) 0:00:01.179 **** 2025-09-18 10:55:08.696897 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-09-18 10:55:08.696909 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-09-18 10:55:08.696920 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-09-18 10:55:08.696931 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-09-18 10:55:08.696943 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-09-18 10:55:08.696954 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-09-18 10:55:08.696965 | orchestrator | 2025-09-18 10:55:08.696976 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-09-18 10:55:08.696987 | orchestrator | 2025-09-18 10:55:08.696998 | orchestrator | TASK [neutron : include_tasks] ************************************************* 
2025-09-18 10:55:08.697010 | orchestrator | Thursday 18 September 2025 10:50:35 +0000 (0:00:00.815) 0:00:01.994 **** 2025-09-18 10:55:08.697021 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:55:08.697054 | orchestrator | 2025-09-18 10:55:08.697066 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-09-18 10:55:08.697077 | orchestrator | Thursday 18 September 2025 10:50:37 +0000 (0:00:01.766) 0:00:03.761 **** 2025-09-18 10:55:08.697088 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:08.697099 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:08.697110 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:55:08.697121 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:08.697132 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:55:08.697143 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:55:08.697154 | orchestrator | 2025-09-18 10:55:08.697165 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-09-18 10:55:08.697176 | orchestrator | Thursday 18 September 2025 10:50:38 +0000 (0:00:01.216) 0:00:04.978 **** 2025-09-18 10:55:08.697187 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:08.697216 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:08.697227 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:08.697238 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:55:08.697249 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:55:08.697259 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:55:08.697270 | orchestrator | 2025-09-18 10:55:08.697281 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-09-18 10:55:08.697292 | orchestrator | Thursday 18 September 2025 10:50:39 +0000 (0:00:01.124) 0:00:06.102 **** 2025-09-18 10:55:08.697304 | orchestrator | ok: 
[testbed-node-0] => { 2025-09-18 10:55:08.697318 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697331 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697343 | orchestrator | } 2025-09-18 10:55:08.697356 | orchestrator | ok: [testbed-node-1] => { 2025-09-18 10:55:08.697369 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697381 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697393 | orchestrator | } 2025-09-18 10:55:08.697406 | orchestrator | ok: [testbed-node-2] => { 2025-09-18 10:55:08.697419 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697431 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697445 | orchestrator | } 2025-09-18 10:55:08.697464 | orchestrator | ok: [testbed-node-3] => { 2025-09-18 10:55:08.697475 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697486 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697497 | orchestrator | } 2025-09-18 10:55:08.697508 | orchestrator | ok: [testbed-node-4] => { 2025-09-18 10:55:08.697520 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697531 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697542 | orchestrator | } 2025-09-18 10:55:08.697553 | orchestrator | ok: [testbed-node-5] => { 2025-09-18 10:55:08.697564 | orchestrator |  "changed": false, 2025-09-18 10:55:08.697575 | orchestrator |  "msg": "All assertions passed" 2025-09-18 10:55:08.697586 | orchestrator | } 2025-09-18 10:55:08.697597 | orchestrator | 2025-09-18 10:55:08.697608 | orchestrator | TASK [neutron : Check for ML2/OVS presence] ************************************ 2025-09-18 10:55:08.697619 | orchestrator | Thursday 18 September 2025 10:50:40 +0000 (0:00:00.725) 0:00:06.828 **** 2025-09-18 10:55:08.697630 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.697641 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.697652 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
10:55:08.697663 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.697673 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.697684 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.697695 | orchestrator | 2025-09-18 10:55:08.697706 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-09-18 10:55:08.697717 | orchestrator | Thursday 18 September 2025 10:50:41 +0000 (0:00:00.554) 0:00:07.383 **** 2025-09-18 10:55:08.697728 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-09-18 10:55:08.697739 | orchestrator | 2025-09-18 10:55:08.697750 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-09-18 10:55:08.697769 | orchestrator | Thursday 18 September 2025 10:50:45 +0000 (0:00:03.976) 0:00:11.360 **** 2025-09-18 10:55:08.697780 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-09-18 10:55:08.697792 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-09-18 10:55:08.697802 | orchestrator | 2025-09-18 10:55:08.697831 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-09-18 10:55:08.697843 | orchestrator | Thursday 18 September 2025 10:50:52 +0000 (0:00:07.134) 0:00:18.495 **** 2025-09-18 10:55:08.697854 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:55:08.697865 | orchestrator | 2025-09-18 10:55:08.697876 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-09-18 10:55:08.697887 | orchestrator | Thursday 18 September 2025 10:50:55 +0000 (0:00:03.588) 0:00:22.084 **** 2025-09-18 10:55:08.697898 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:55:08.697909 | orchestrator | changed: [testbed-node-0] => (item=neutron -> 
service) 2025-09-18 10:55:08.697920 | orchestrator | 2025-09-18 10:55:08.697931 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-09-18 10:55:08.697942 | orchestrator | Thursday 18 September 2025 10:51:00 +0000 (0:00:04.456) 0:00:26.540 **** 2025-09-18 10:55:08.697953 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:55:08.697963 | orchestrator | 2025-09-18 10:55:08.697974 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-09-18 10:55:08.697985 | orchestrator | Thursday 18 September 2025 10:51:04 +0000 (0:00:03.947) 0:00:30.487 **** 2025-09-18 10:55:08.697996 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-09-18 10:55:08.698007 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-09-18 10:55:08.698068 | orchestrator | 2025-09-18 10:55:08.698082 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 10:55:08.698093 | orchestrator | Thursday 18 September 2025 10:51:12 +0000 (0:00:08.638) 0:00:39.126 **** 2025-09-18 10:55:08.698104 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.698115 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.698126 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.698137 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.698147 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.698158 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.698169 | orchestrator | 2025-09-18 10:55:08.698180 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-09-18 10:55:08.698205 | orchestrator | Thursday 18 September 2025 10:51:13 +0000 (0:00:00.855) 0:00:39.982 **** 2025-09-18 10:55:08.698217 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.698228 | orchestrator | 
skipping: [testbed-node-0] 2025-09-18 10:55:08.698239 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.698250 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.698260 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.698271 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.698282 | orchestrator | 2025-09-18 10:55:08.698292 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-09-18 10:55:08.698303 | orchestrator | Thursday 18 September 2025 10:51:16 +0000 (0:00:02.957) 0:00:42.939 **** 2025-09-18 10:55:08.698314 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:08.698325 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:08.698336 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:08.698347 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:55:08.698358 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:55:08.698369 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:55:08.698379 | orchestrator | 2025-09-18 10:55:08.698390 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-09-18 10:55:08.698401 | orchestrator | Thursday 18 September 2025 10:51:18 +0000 (0:00:01.981) 0:00:44.921 **** 2025-09-18 10:55:08.698420 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.698431 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.698442 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.698453 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.698464 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.698474 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.698485 | orchestrator | 2025-09-18 10:55:08.698496 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-09-18 10:55:08.698507 | orchestrator | Thursday 18 September 2025 10:51:21 +0000 (0:00:02.888) 0:00:47.809 **** 2025-09-18 
10:55:08.698526 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698556 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698581 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698605 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698617 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698629 | orchestrator | 2025-09-18 10:55:08.698640 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] ***************************** 2025-09-18 10:55:08.698651 | orchestrator | Thursday 18 September 2025 10:51:26 +0000 (0:00:04.678) 0:00:52.487 **** 2025-09-18 10:55:08.698662 | orchestrator | [WARNING]: Skipped 2025-09-18 10:55:08.698674 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path 2025-09-18 10:55:08.698685 | orchestrator | due to this access issue: 2025-09-18 10:55:08.698696 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not 2025-09-18 10:55:08.698707 | orchestrator | a directory 2025-09-18 10:55:08.698718 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 10:55:08.698729 | orchestrator | 2025-09-18 10:55:08.698740 | 
orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 10:55:08.698758 | orchestrator | Thursday 18 September 2025 10:51:27 +0000 (0:00:01.113) 0:00:53.601 **** 2025-09-18 10:55:08.698770 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:55:08.698782 | orchestrator | 2025-09-18 10:55:08.698793 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ******** 2025-09-18 10:55:08.698804 | orchestrator | Thursday 18 September 2025 10:51:28 +0000 (0:00:01.489) 0:00:55.091 **** 2025-09-18 10:55:08.698816 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698853 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.698866 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698885 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698897 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.698915 | orchestrator | 2025-09-18 10:55:08.698927 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend 
internal TLS certificate] *** 2025-09-18 10:55:08.698938 | orchestrator | Thursday 18 September 2025 10:51:32 +0000 (0:00:04.165) 0:00:59.256 **** 2025-09-18 10:55:08.698950 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.698961 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.698977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.698990 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.699002 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699020 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699031 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.699042 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.699054 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 
'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699072 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.699083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699094 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.699105 | orchestrator | 2025-09-18 10:55:08.699116 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] ***** 2025-09-18 10:55:08.699127 | orchestrator | Thursday 18 September 2025 10:51:36 +0000 (0:00:03.381) 0:01:02.638 **** 
2025-09-18 10:55:08.699143 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699155 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.699173 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699185 | orchestrator | skipping: [testbed-node-2] 
2025-09-18 10:55:08.699229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699248 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.699259 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699271 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.699286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 
'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699298 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.699309 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699321 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.699332 | orchestrator | 2025-09-18 10:55:08.699343 | orchestrator | TASK [neutron : Creating TLS backend PEM File] ********************************* 2025-09-18 10:55:08.699354 | orchestrator | Thursday 18 September 2025 10:51:39 +0000 (0:00:03.150) 0:01:05.788 **** 2025-09-18 10:55:08.699365 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.699376 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
10:55:08.699387 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.699397 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.699408 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.699419 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.699430 | orchestrator | 2025-09-18 10:55:08.699441 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-09-18 10:55:08.699464 | orchestrator | Thursday 18 September 2025 10:51:42 +0000 (0:00:03.280) 0:01:09.069 **** 2025-09-18 10:55:08.699475 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.699486 | orchestrator | 2025-09-18 10:55:08.699498 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-09-18 10:55:08.699509 | orchestrator | Thursday 18 September 2025 10:51:42 +0000 (0:00:00.152) 0:01:09.221 **** 2025-09-18 10:55:08.699520 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.699531 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.699542 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.699553 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.699564 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.699575 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.699585 | orchestrator | 2025-09-18 10:55:08.699596 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-09-18 10:55:08.699607 | orchestrator | Thursday 18 September 2025 10:51:43 +0000 (0:00:00.654) 0:01:09.876 **** 2025-09-18 10:55:08.699619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699631 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.699642 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699654 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.699669 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699681 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.699699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.699716 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.699728 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699739 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.699750 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.699762 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.699773 | orchestrator | 2025-09-18 10:55:08.699785 | orchestrator | TASK [neutron : Copying over config.json files for services] ******************* 2025-09-18 10:55:08.699795 | orchestrator | Thursday 18 September 2025 10:51:46 +0000 (0:00:03.293) 0:01:13.169 **** 2025-09-18 10:55:08.699811 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.699823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.699849 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.699861 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.699873 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.699885 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.699897 | orchestrator | 2025-09-18 10:55:08.699912 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-09-18 10:55:08.699924 | orchestrator | Thursday 18 September 2025 10:51:52 +0000 (0:00:05.581) 0:01:18.751 **** 2025-09-18 10:55:08.699935 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.699959 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': 
{'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.699971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.699982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.699999 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.700021 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.700033 | orchestrator | 2025-09-18 10:55:08.700044 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-09-18 10:55:08.700055 | orchestrator | Thursday 18 September 2025 10:51:59 +0000 (0:00:07.084) 0:01:25.836 **** 2025-09-18 10:55:08.700073 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.700085 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.700096 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.700108 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.700119 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.700131 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.700147 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  
2025-09-18 10:55:08.700165 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.700176 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.700188 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.700222 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.700234 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.700245 | orchestrator | 2025-09-18 10:55:08.700256 | 
orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-09-18 10:55:08.700267 | orchestrator | Thursday 18 September 2025 10:52:02 +0000 (0:00:02.770) 0:01:28.606 ****
2025-09-18 10:55:08.700278 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.700289 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700300 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700311 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:55:08.700322 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:55:08.700333 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:55:08.700344 | orchestrator |
2025-09-18 10:55:08.700356 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-09-18 10:55:08.700367 | orchestrator | Thursday 18 September 2025 10:52:05 +0000 (0:00:02.915) 0:01:31.521 ****
2025-09-18 10:55:08.700379 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-09-18 10:55:08.700397 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700413 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image':
'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.700425 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.700436 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.700448 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.700465 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.700478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.700490 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-09-18 10:55:08.700507 | orchestrator |
2025-09-18 10:55:08.700519 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] ****************************
2025-09-18 10:55:08.700530 | orchestrator | Thursday 18 September 2025 10:52:09 +0000 (0:00:03.879) 0:01:35.401 ****
2025-09-18 10:55:08.700541 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.700552 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.700563 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.700574 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700585 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.700596 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700607 | orchestrator |
2025-09-18 10:55:08.700623 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] ****************************
2025-09-18 10:55:08.700634 | orchestrator | Thursday 18 September 2025 10:52:12 +0000 (0:00:02.960) 0:01:38.361 ****
2025-09-18 10:55:08.700645 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.700656 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.700666 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.700677 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700688 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.700699 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700710 | orchestrator |
2025-09-18 10:55:08.700721 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] **********************************
2025-09-18 10:55:08.700732 | orchestrator | Thursday 18 September 2025 10:52:14 +0000 (0:00:02.149) 0:01:40.510 ****
2025-09-18 10:55:08.700742 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.700753 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.700764 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.700775 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700786 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.700797 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700808 | orchestrator |
2025-09-18 10:55:08.700818 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] ***********************************
2025-09-18 10:55:08.700829 | orchestrator | Thursday 18 September 2025 10:52:16 +0000 (0:00:02.493) 0:01:43.003 ****
2025-09-18 10:55:08.700840 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.700851 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.700862 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.700873 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.700884 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700895 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700905 | orchestrator |
2025-09-18 10:55:08.700916 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************
2025-09-18 10:55:08.700927 | orchestrator | Thursday 18 September 2025 10:52:19 +0000 (0:00:02.988) 0:01:45.991 ****
2025-09-18 10:55:08.700938 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.700949 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.700960 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.700971 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.700988 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.700999 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.701010 | orchestrator |
2025-09-18 10:55:08.701021 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] ***********************************
2025-09-18 10:55:08.701033 | orchestrator | Thursday 18 September 2025 10:52:21 +0000 (0:00:02.132) 0:01:48.124 ****
2025-09-18 10:55:08.701044 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.701061 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.701072 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:08.701083 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.701094 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.701105 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.701116 | orchestrator |
2025-09-18 10:55:08.701127 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] *************************************
2025-09-18 10:55:08.701138 | orchestrator | Thursday 18 September 2025 10:52:24 +0000 (0:00:02.400) 0:01:50.524 ****
2025-09-18 10:55:08.701149 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-18 10:55:08.701160 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:08.701171 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-18 10:55:08.701182 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:08.701210 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-18 10:55:08.701221 | orchestrator | skipping: [testbed-node-3]
2025-09-18 10:55:08.701232 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-18 10:55:08.701243 | orchestrator | skipping: [testbed-node-4]
2025-09-18 10:55:08.701254 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)
2025-09-18 10:55:08.701265 | orchestrator | skipping: [testbed-node-5]
2025-09-18 10:55:08.701276 | orchestrator | skipping: [testbed-node-2] =>
(item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-09-18 10:55:08.701287 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701298 | orchestrator | 2025-09-18 10:55:08.701309 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-09-18 10:55:08.701320 | orchestrator | Thursday 18 September 2025 10:52:27 +0000 (0:00:03.097) 0:01:53.622 **** 2025-09-18 10:55:08.701331 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701342 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.701359 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701371 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.701388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701405 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.701417 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701429 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701440 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701452 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.701463 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701475 | orchestrator | skipping: [testbed-node-5] 2025-09-18 
10:55:08.701486 | orchestrator | 2025-09-18 10:55:08.701501 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-09-18 10:55:08.701512 | orchestrator | Thursday 18 September 2025 10:52:29 +0000 (0:00:02.348) 0:01:55.971 **** 2025-09-18 10:55:08.701524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701541 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.701559 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701572 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.701583 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701595 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.701606 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.701618 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701651 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.701662 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.701673 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.701684 | orchestrator | 2025-09-18 10:55:08.701695 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] 
******************************* 2025-09-18 10:55:08.701706 | orchestrator | Thursday 18 September 2025 10:52:31 +0000 (0:00:02.351) 0:01:58.322 **** 2025-09-18 10:55:08.701718 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701734 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.701745 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.701756 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.701767 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.701778 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.701789 | orchestrator | 2025-09-18 10:55:08.701800 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-09-18 10:55:08.701812 | orchestrator | Thursday 18 September 2025 10:52:34 +0000 (0:00:02.101) 0:02:00.424 **** 2025-09-18 10:55:08.701823 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.701834 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.701845 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701856 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:55:08.701866 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:55:08.701877 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:55:08.701888 | orchestrator | 2025-09-18 10:55:08.701899 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-09-18 10:55:08.701910 | orchestrator | Thursday 18 September 2025 10:52:41 +0000 (0:00:07.337) 0:02:07.762 **** 2025-09-18 10:55:08.701921 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.701932 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.701943 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.701954 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.701965 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.701976 | orchestrator | skipping: [testbed-node-5] 
2025-09-18 10:55:08.701987 | orchestrator | 2025-09-18 10:55:08.701998 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-09-18 10:55:08.702009 | orchestrator | Thursday 18 September 2025 10:52:43 +0000 (0:00:02.495) 0:02:10.258 **** 2025-09-18 10:55:08.702046 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702057 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702068 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702079 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702090 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702101 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702112 | orchestrator | 2025-09-18 10:55:08.702123 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-09-18 10:55:08.702134 | orchestrator | Thursday 18 September 2025 10:52:45 +0000 (0:00:02.076) 0:02:12.334 **** 2025-09-18 10:55:08.702145 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702156 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702167 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702178 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702202 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702220 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702231 | orchestrator | 2025-09-18 10:55:08.702242 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-09-18 10:55:08.702253 | orchestrator | Thursday 18 September 2025 10:52:47 +0000 (0:00:01.754) 0:02:14.089 **** 2025-09-18 10:55:08.702264 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702275 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702286 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702297 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 10:55:08.702308 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702319 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702330 | orchestrator | 2025-09-18 10:55:08.702341 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-09-18 10:55:08.702352 | orchestrator | Thursday 18 September 2025 10:52:49 +0000 (0:00:02.173) 0:02:16.263 **** 2025-09-18 10:55:08.702363 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702374 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702386 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702396 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702407 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702418 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702430 | orchestrator | 2025-09-18 10:55:08.702441 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-09-18 10:55:08.702452 | orchestrator | Thursday 18 September 2025 10:52:53 +0000 (0:00:03.487) 0:02:19.750 **** 2025-09-18 10:55:08.702471 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702482 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702493 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702504 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702515 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702526 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702537 | orchestrator | 2025-09-18 10:55:08.702548 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-09-18 10:55:08.702560 | orchestrator | Thursday 18 September 2025 10:52:56 +0000 (0:00:02.860) 0:02:22.611 **** 2025-09-18 10:55:08.702571 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702582 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 10:55:08.702593 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702604 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702615 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702626 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702637 | orchestrator | 2025-09-18 10:55:08.702648 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-09-18 10:55:08.702659 | orchestrator | Thursday 18 September 2025 10:52:57 +0000 (0:00:01.629) 0:02:24.241 **** 2025-09-18 10:55:08.702670 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702681 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702693 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702704 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702715 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702726 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702737 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702748 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.702826 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702841 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.702852 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-09-18 10:55:08.702869 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.702881 | orchestrator | 2025-09-18 10:55:08.702892 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] 
******************************** 2025-09-18 10:55:08.702902 | orchestrator | Thursday 18 September 2025 10:52:59 +0000 (0:00:02.002) 0:02:26.243 **** 2025-09-18 10:55:08.702914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.702925 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.702937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.702948 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.702965 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-09-18 10:55:08.702977 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.702988 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.702999 | orchestrator | skipping: 
[testbed-node-4] 2025-09-18 10:55:08.703023 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.703035 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.703046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-09-18 10:55:08.703058 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.703069 | orchestrator | 2025-09-18 10:55:08.703080 | orchestrator | TASK [neutron : Check neutron containers] ************************************** 2025-09-18 10:55:08.703091 | orchestrator | Thursday 18 September 2025 10:53:01 +0000 (0:00:01.799) 0:02:28.043 
**** 2025-09-18 10:55:08.703103 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.703119 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.703136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.703154 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/neutron-server:2024.2', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-09-18 10:55:08.703166 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.703178 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/neutron-metadata-agent:2024.2', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-09-18 10:55:08.703235 | orchestrator | 2025-09-18 10:55:08.703249 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-09-18 10:55:08.703260 | orchestrator | Thursday 18 September 2025 10:53:05 +0000 (0:00:03.897) 0:02:31.941 **** 2025-09-18 10:55:08.703271 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.703282 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.703299 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.703310 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:55:08.703321 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:55:08.703332 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:55:08.703343 | orchestrator | 2025-09-18 10:55:08.703354 | orchestrator | TASK [neutron : Creating Neutron database] 
************************************* 2025-09-18 10:55:08.703365 | orchestrator | Thursday 18 September 2025 10:53:06 +0000 (0:00:00.774) 0:02:32.716 **** 2025-09-18 10:55:08.703376 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.703387 | orchestrator | 2025-09-18 10:55:08.703398 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-09-18 10:55:08.703410 | orchestrator | Thursday 18 September 2025 10:53:08 +0000 (0:00:02.271) 0:02:34.987 **** 2025-09-18 10:55:08.703427 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.703438 | orchestrator | 2025-09-18 10:55:08.703449 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-09-18 10:55:08.703460 | orchestrator | Thursday 18 September 2025 10:53:10 +0000 (0:00:02.201) 0:02:37.189 **** 2025-09-18 10:55:08.703471 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.703483 | orchestrator | 2025-09-18 10:55:08.703494 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703505 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:39.241) 0:03:16.430 **** 2025-09-18 10:55:08.703516 | orchestrator | 2025-09-18 10:55:08.703527 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703538 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.065) 0:03:16.496 **** 2025-09-18 10:55:08.703549 | orchestrator | 2025-09-18 10:55:08.703561 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703571 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.319) 0:03:16.815 **** 2025-09-18 10:55:08.703582 | orchestrator | 2025-09-18 10:55:08.703594 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703605 | 
orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.066) 0:03:16.881 **** 2025-09-18 10:55:08.703615 | orchestrator | 2025-09-18 10:55:08.703632 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703644 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.072) 0:03:16.954 **** 2025-09-18 10:55:08.703655 | orchestrator | 2025-09-18 10:55:08.703666 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-09-18 10:55:08.703677 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.065) 0:03:17.020 **** 2025-09-18 10:55:08.703688 | orchestrator | 2025-09-18 10:55:08.703698 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-09-18 10:55:08.703709 | orchestrator | Thursday 18 September 2025 10:53:50 +0000 (0:00:00.072) 0:03:17.092 **** 2025-09-18 10:55:08.703720 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.703732 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:55:08.703743 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:55:08.703754 | orchestrator | 2025-09-18 10:55:08.703765 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-09-18 10:55:08.703776 | orchestrator | Thursday 18 September 2025 10:54:15 +0000 (0:00:24.773) 0:03:41.865 **** 2025-09-18 10:55:08.703786 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:55:08.703796 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:55:08.703805 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:55:08.703815 | orchestrator | 2025-09-18 10:55:08.703825 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:55:08.703835 | orchestrator | testbed-node-0 : ok=26  changed=15  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:55:08.703845 | orchestrator | 
testbed-node-1 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-18 10:55:08.703855 | orchestrator | testbed-node-2 : ok=16  changed=8  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-09-18 10:55:08.703865 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:55:08.703875 | orchestrator | testbed-node-4 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:55:08.703885 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-09-18 10:55:08.703900 | orchestrator | 2025-09-18 10:55:08.703910 | orchestrator | 2025-09-18 10:55:08.703920 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:55:08.703930 | orchestrator | Thursday 18 September 2025 10:55:06 +0000 (0:00:50.558) 0:04:32.424 **** 2025-09-18 10:55:08.703940 | orchestrator | =============================================================================== 2025-09-18 10:55:08.703950 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 50.56s 2025-09-18 10:55:08.703960 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 39.24s 2025-09-18 10:55:08.703969 | orchestrator | neutron : Restart neutron-server container ----------------------------- 24.77s 2025-09-18 10:55:08.703979 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.64s 2025-09-18 10:55:08.703989 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 7.34s 2025-09-18 10:55:08.703998 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 7.13s 2025-09-18 10:55:08.704013 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.08s 2025-09-18 10:55:08.704023 | orchestrator | neutron 
: Copying over config.json files for services ------------------- 5.58s 2025-09-18 10:55:08.704032 | orchestrator | neutron : Ensuring config directories exist ----------------------------- 4.68s 2025-09-18 10:55:08.704042 | orchestrator | service-ks-register : neutron | Creating users -------------------------- 4.46s 2025-09-18 10:55:08.704052 | orchestrator | service-cert-copy : neutron | Copying over extra CA certificates -------- 4.17s 2025-09-18 10:55:08.704061 | orchestrator | service-ks-register : neutron | Creating services ----------------------- 3.98s 2025-09-18 10:55:08.704071 | orchestrator | service-ks-register : neutron | Creating roles -------------------------- 3.95s 2025-09-18 10:55:08.704081 | orchestrator | neutron : Check neutron containers -------------------------------------- 3.90s 2025-09-18 10:55:08.704091 | orchestrator | neutron : Copying over ml2_conf.ini ------------------------------------- 3.88s 2025-09-18 10:55:08.704100 | orchestrator | service-ks-register : neutron | Creating projects ----------------------- 3.59s 2025-09-18 10:55:08.704110 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 3.49s 2025-09-18 10:55:08.704120 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS certificate --- 3.38s 2025-09-18 10:55:08.704130 | orchestrator | neutron : Copying over existing policy file ----------------------------- 3.29s 2025-09-18 10:55:08.704139 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 3.28s 2025-09-18 10:55:08.704149 | orchestrator | 2025-09-18 10:55:08.704159 | orchestrator | 2025-09-18 10:55:08.704168 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:55:08.704178 | orchestrator | 2025-09-18 10:55:08.704188 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:55:08.704210 | orchestrator | 
Thursday 18 September 2025 10:53:58 +0000 (0:00:00.197) 0:00:00.197 **** 2025-09-18 10:55:08.704220 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:08.704230 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:08.704240 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:08.704249 | orchestrator | 2025-09-18 10:55:08.704264 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:55:08.704273 | orchestrator | Thursday 18 September 2025 10:53:58 +0000 (0:00:00.236) 0:00:00.434 **** 2025-09-18 10:55:08.704283 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True) 2025-09-18 10:55:08.704293 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True) 2025-09-18 10:55:08.704302 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True) 2025-09-18 10:55:08.704312 | orchestrator | 2025-09-18 10:55:08.704329 | orchestrator | PLAY [Apply role placement] **************************************************** 2025-09-18 10:55:08.704347 | orchestrator | 2025-09-18 10:55:08.704361 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 10:55:08.704377 | orchestrator | Thursday 18 September 2025 10:53:59 +0000 (0:00:00.332) 0:00:00.766 **** 2025-09-18 10:55:08.704407 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:55:08.704423 | orchestrator | 2025-09-18 10:55:08.704439 | orchestrator | TASK [service-ks-register : placement | Creating services] ********************* 2025-09-18 10:55:08.704449 | orchestrator | Thursday 18 September 2025 10:53:59 +0000 (0:00:00.519) 0:00:01.286 **** 2025-09-18 10:55:08.704459 | orchestrator | changed: [testbed-node-0] => (item=placement (placement)) 2025-09-18 10:55:08.704468 | orchestrator | 2025-09-18 10:55:08.704478 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] 
******************** 2025-09-18 10:55:08.704488 | orchestrator | Thursday 18 September 2025 10:54:03 +0000 (0:00:03.768) 0:00:05.055 **** 2025-09-18 10:55:08.704498 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal) 2025-09-18 10:55:08.704508 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public) 2025-09-18 10:55:08.704518 | orchestrator | 2025-09-18 10:55:08.704527 | orchestrator | TASK [service-ks-register : placement | Creating projects] ********************* 2025-09-18 10:55:08.704537 | orchestrator | Thursday 18 September 2025 10:54:10 +0000 (0:00:07.247) 0:00:12.303 **** 2025-09-18 10:55:08.704546 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:55:08.704556 | orchestrator | 2025-09-18 10:55:08.704566 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************ 2025-09-18 10:55:08.704576 | orchestrator | Thursday 18 September 2025 10:54:14 +0000 (0:00:03.480) 0:00:15.784 **** 2025-09-18 10:55:08.704585 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:55:08.704595 | orchestrator | changed: [testbed-node-0] => (item=placement -> service) 2025-09-18 10:55:08.704605 | orchestrator | 2025-09-18 10:55:08.704615 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************ 2025-09-18 10:55:08.704624 | orchestrator | Thursday 18 September 2025 10:54:18 +0000 (0:00:04.101) 0:00:19.885 **** 2025-09-18 10:55:08.704634 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:55:08.704643 | orchestrator | 2025-09-18 10:55:08.704653 | orchestrator | TASK [service-ks-register : placement | Granting user roles] ******************* 2025-09-18 10:55:08.704663 | orchestrator | Thursday 18 September 2025 10:54:21 +0000 (0:00:03.729) 0:00:23.615 **** 2025-09-18 10:55:08.704672 | orchestrator | changed: [testbed-node-0] 
=> (item=placement -> service -> admin) 2025-09-18 10:55:08.704682 | orchestrator | 2025-09-18 10:55:08.704692 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 10:55:08.704701 | orchestrator | Thursday 18 September 2025 10:54:26 +0000 (0:00:04.423) 0:00:28.038 **** 2025-09-18 10:55:08.704711 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.704721 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.704731 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.704740 | orchestrator | 2025-09-18 10:55:08.704750 | orchestrator | TASK [placement : Ensuring config directories exist] *************************** 2025-09-18 10:55:08.704765 | orchestrator | Thursday 18 September 2025 10:54:26 +0000 (0:00:00.310) 0:00:28.349 **** 2025-09-18 10:55:08.704775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.704799 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 
'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.704810 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.704820 | orchestrator | 2025-09-18 10:55:08.704830 | orchestrator | TASK [placement : Check if policies shall be overwritten] ********************** 2025-09-18 10:55:08.704840 | orchestrator | Thursday 18 September 2025 10:54:27 +0000 (0:00:00.915) 0:00:29.264 **** 2025-09-18 10:55:08.704849 | orchestrator | skipping: [testbed-node-0] 2025-09-18 
10:55:08.704859 | orchestrator | 2025-09-18 10:55:08.704869 | orchestrator | TASK [placement : Set placement policy file] *********************************** 2025-09-18 10:55:08.704879 | orchestrator | Thursday 18 September 2025 10:54:27 +0000 (0:00:00.125) 0:00:29.390 **** 2025-09-18 10:55:08.704889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.704898 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.704908 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.704918 | orchestrator | 2025-09-18 10:55:08.704927 | orchestrator | TASK [placement : include_tasks] *********************************************** 2025-09-18 10:55:08.704937 | orchestrator | Thursday 18 September 2025 10:54:28 +0000 (0:00:00.393) 0:00:29.783 **** 2025-09-18 10:55:08.704947 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:55:08.704957 | orchestrator | 2025-09-18 10:55:08.704967 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ****** 2025-09-18 10:55:08.704977 | orchestrator | Thursday 18 September 2025 10:54:28 +0000 (0:00:00.476) 0:00:30.260 **** 2025-09-18 10:55:08.704991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': 
True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705023 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 
'no'}}}}) 2025-09-18 10:55:08.705034 | orchestrator | 2025-09-18 10:55:08.705044 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-09-18 10:55:08.705053 | orchestrator | Thursday 18 September 2025 10:54:29 +0000 (0:00:01.372) 0:00:31.632 **** 2025-09-18 10:55:08.705064 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705074 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.705091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705108 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.705118 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705128 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.705138 | orchestrator | 2025-09-18 10:55:08.705148 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-09-18 10:55:08.705158 | orchestrator | Thursday 18 September 2025 10:54:30 +0000 (0:00:00.716) 0:00:32.349 **** 2025-09-18 10:55:08.705173 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705183 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.705208 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705219 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.705229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705249 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.705259 | orchestrator | 2025-09-18 10:55:08.705270 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-09-18 10:55:08.705280 | orchestrator | Thursday 18 September 2025 10:54:31 +0000 (0:00:00.641) 0:00:32.990 **** 2025-09-18 10:55:08.705290 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705305 | orchestrator | changed: [testbed-node-2] => (item={'key': 
'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705316 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705326 | orchestrator | 2025-09-18 10:55:08.705336 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-09-18 10:55:08.705346 | orchestrator | Thursday 
18 September 2025 10:54:32 +0000 (0:00:01.289) 0:00:34.280 **** 2025-09-18 10:55:08.705356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705376 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705391 | orchestrator | changed: [testbed-node-2] 
=> (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705402 | orchestrator | 2025-09-18 10:55:08.705412 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-09-18 10:55:08.705422 | orchestrator | Thursday 18 September 2025 10:54:35 +0000 (0:00:02.454) 0:00:36.735 **** 2025-09-18 10:55:08.705431 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 10:55:08.705441 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 10:55:08.705451 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-09-18 10:55:08.705461 | orchestrator | 2025-09-18 10:55:08.705471 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-09-18 10:55:08.705481 | orchestrator | Thursday 18 September 2025 10:54:36 +0000 (0:00:01.578) 0:00:38.313 **** 2025-09-18 10:55:08.705490 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.705500 | orchestrator | changed: [testbed-node-1] 
2025-09-18 10:55:08.705510 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:55:08.705520 | orchestrator | 2025-09-18 10:55:08.705530 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-09-18 10:55:08.705540 | orchestrator | Thursday 18 September 2025 10:54:37 +0000 (0:00:01.309) 0:00:39.622 **** 2025-09-18 10:55:08.705550 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705565 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:08.705579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705590 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:08.705600 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-09-18 10:55:08.705611 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:08.705621 | orchestrator | 2025-09-18 10:55:08.705630 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-09-18 10:55:08.705644 | orchestrator | Thursday 18 September 2025 10:54:38 +0000 (0:00:00.420) 0:00:40.043 **** 2025-09-18 10:55:08.705654 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705665 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705685 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/placement-api:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-09-18 10:55:08.705696 | orchestrator | 2025-09-18 10:55:08.705705 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-09-18 10:55:08.705716 | orchestrator | Thursday 18 September 2025 10:54:39 +0000 (0:00:01.121) 0:00:41.164 **** 2025-09-18 10:55:08.705726 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.705735 | orchestrator | 2025-09-18 10:55:08.705745 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-09-18 10:55:08.705755 | orchestrator | Thursday 18 September 2025 10:54:41 +0000 (0:00:02.357) 0:00:43.522 **** 2025-09-18 10:55:08.705765 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.705775 | orchestrator | 2025-09-18 10:55:08.705785 | orchestrator | TASK [placement : Running placement bootstrap container] *********************** 2025-09-18 10:55:08.705794 | orchestrator | Thursday 18 September 2025 10:54:43 +0000 (0:00:01.805) 0:00:45.327 **** 2025-09-18 10:55:08.705804 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.705814 | orchestrator | 2025-09-18 10:55:08.705824 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-18 10:55:08.705833 | orchestrator | Thursday 18 September 2025 10:54:54 +0000 (0:00:11.110) 0:00:56.438 **** 2025-09-18 10:55:08.705843 | orchestrator | 2025-09-18 10:55:08.705853 | orchestrator | TASK [placement : Flush handlers] 
********************************************** 2025-09-18 10:55:08.705863 | orchestrator | Thursday 18 September 2025 10:54:54 +0000 (0:00:00.075) 0:00:56.514 **** 2025-09-18 10:55:08.705873 | orchestrator | 2025-09-18 10:55:08.705883 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-09-18 10:55:08.705892 | orchestrator | Thursday 18 September 2025 10:54:54 +0000 (0:00:00.079) 0:00:56.594 **** 2025-09-18 10:55:08.705902 | orchestrator | 2025-09-18 10:55:08.705912 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-09-18 10:55:08.705922 | orchestrator | Thursday 18 September 2025 10:54:54 +0000 (0:00:00.090) 0:00:56.685 **** 2025-09-18 10:55:08.705932 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:08.705942 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:55:08.705952 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:55:08.705966 | orchestrator | 2025-09-18 10:55:08.705976 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:55:08.705986 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:55:08.705996 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:55:08.706011 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:55:08.706047 | orchestrator | 2025-09-18 10:55:08.706057 | orchestrator | 2025-09-18 10:55:08.706067 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:55:08.706077 | orchestrator | Thursday 18 September 2025 10:55:05 +0000 (0:00:10.538) 0:01:07.223 **** 2025-09-18 10:55:08.706087 | orchestrator | =============================================================================== 2025-09-18 10:55:08.706096 | 
orchestrator | placement : Running placement bootstrap container ---------------------- 11.11s 2025-09-18 10:55:08.706106 | orchestrator | placement : Restart placement-api container ---------------------------- 10.54s 2025-09-18 10:55:08.706116 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 7.25s 2025-09-18 10:55:08.706126 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 4.42s 2025-09-18 10:55:08.706136 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.10s 2025-09-18 10:55:08.706146 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.77s 2025-09-18 10:55:08.706156 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.73s 2025-09-18 10:55:08.706165 | orchestrator | service-ks-register : placement | Creating projects --------------------- 3.48s 2025-09-18 10:55:08.706175 | orchestrator | placement : Copying over placement.conf --------------------------------- 2.45s 2025-09-18 10:55:08.706185 | orchestrator | placement : Creating placement databases -------------------------------- 2.36s 2025-09-18 10:55:08.706235 | orchestrator | placement : Creating placement databases user and setting permissions --- 1.81s 2025-09-18 10:55:08.706246 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 1.58s 2025-09-18 10:55:08.706255 | orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 1.37s 2025-09-18 10:55:08.706265 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 1.31s 2025-09-18 10:55:08.706275 | orchestrator | placement : Copying over config.json files for services ----------------- 1.29s 2025-09-18 10:55:08.706285 | orchestrator | placement : Check placement containers ---------------------------------- 1.12s 2025-09-18 10:55:08.706295 | orchestrator | 
placement : Ensuring config directories exist --------------------------- 0.92s 2025-09-18 10:55:08.706304 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS certificate --- 0.72s 2025-09-18 10:55:08.706314 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 0.64s 2025-09-18 10:55:08.706324 | orchestrator | placement : include_tasks ----------------------------------------------- 0.52s 2025-09-18 10:55:08.706334 | orchestrator | 2025-09-18 10:55:08 | INFO  | Task 0d391ef8-0ae8-4031-b3da-4b44eca90b2b is in state SUCCESS 2025-09-18 10:55:08.706344 | orchestrator | 2025-09-18 10:55:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:11.756127 | orchestrator | 2025-09-18 10:55:11 | INFO  | Task cb081d66-0589-4036-a735-8196864f3f66 is in state STARTED 2025-09-18 10:55:11.758285 | orchestrator | 2025-09-18 10:55:11 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:55:11.761100 | orchestrator | 2025-09-18 10:55:11 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:55:11.762638 | orchestrator | 2025-09-18 10:55:11 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:11.762839 | orchestrator | 2025-09-18 10:55:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:14.812553 | orchestrator | 2025-09-18 10:55:14 | INFO  | Task cb081d66-0589-4036-a735-8196864f3f66 is in state STARTED 2025-09-18 10:55:14.814622 | orchestrator | 2025-09-18 10:55:14 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state STARTED 2025-09-18 10:55:14.817253 | orchestrator | 2025-09-18 10:55:14 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:55:14.818944 | orchestrator | 2025-09-18 10:55:14 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:14.819204 | orchestrator | 2025-09-18 10:55:14 | INFO  | Wait 1 second(s) 
until the next check [… identical polling rounds for tasks cb081d66-0589-4036-a735-8196864f3f66, b1d0e8ae-3514-4762-80f4-91a1c03c9208, a77f0e3b-ebdc-41c0-b717-b4b3be8441f8, 5c373241-2587-4fad-8641-365605684f2e (state STARTED) repeated every ~3s from 10:55:17 to 10:55:42, omitted …] 2025-09-18 10:55:45.328912 | orchestrator | 2025-09-18 10:55:45 | INFO  |
Task cb081d66-0589-4036-a735-8196864f3f66 is in state SUCCESS 2025-09-18 10:55:45.329357 | orchestrator | 2025-09-18 10:55:45 | INFO  | Task b1d0e8ae-3514-4762-80f4-91a1c03c9208 is in state SUCCESS 2025-09-18 10:55:45.329847 | orchestrator | 2025-09-18 10:55:45.329882 | orchestrator | 2025-09-18 10:55:45.329895 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:55:45.329907 | orchestrator | 2025-09-18 10:55:45.329918 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:55:45.329930 | orchestrator | Thursday 18 September 2025 10:55:09 +0000 (0:00:00.256) 0:00:00.256 **** 2025-09-18 10:55:45.329941 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:45.329954 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:45.329965 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:45.329976 | orchestrator | ok: [testbed-manager] 2025-09-18 10:55:45.329987 | orchestrator | ok: [testbed-node-3] 2025-09-18 10:55:45.329998 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:55:45.330086 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:55:45.330115 | orchestrator | 2025-09-18 10:55:45.330136 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:55:45.330225 | orchestrator | Thursday 18 September 2025 10:55:10 +0000 (0:00:00.801) 0:00:01.058 **** 2025-09-18 10:55:45.330249 | orchestrator | ok: [testbed-node-0] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330268 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330287 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330542 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330562 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330575 | orchestrator | ok: [testbed-node-4] => 
(item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330589 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True) 2025-09-18 10:55:45.330601 | orchestrator | 2025-09-18 10:55:45.330614 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-09-18 10:55:45.330627 | orchestrator | 2025-09-18 10:55:45.330639 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************ 2025-09-18 10:55:45.330651 | orchestrator | Thursday 18 September 2025 10:55:11 +0000 (0:00:00.734) 0:00:01.793 **** 2025-09-18 10:55:45.330665 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:55:45.330678 | orchestrator | 2025-09-18 10:55:45.330690 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] ********************** 2025-09-18 10:55:45.330717 | orchestrator | Thursday 18 September 2025 10:55:13 +0000 (0:00:01.573) 0:00:03.367 **** 2025-09-18 10:55:45.330730 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store)) 2025-09-18 10:55:45.330743 | orchestrator | 2025-09-18 10:55:45.330755 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] ********************* 2025-09-18 10:55:45.330767 | orchestrator | Thursday 18 September 2025 10:55:16 +0000 (0:00:03.749) 0:00:07.116 **** 2025-09-18 10:55:45.330780 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal) 2025-09-18 10:55:45.330794 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public) 2025-09-18 10:55:45.330806 | orchestrator | 2025-09-18 10:55:45.330817 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] ********************** 2025-09-18 10:55:45.330828 | 
orchestrator | Thursday 18 September 2025 10:55:23 +0000 (0:00:07.084) 0:00:14.201 **** 2025-09-18 10:55:45.330839 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:55:45.330850 | orchestrator | 2025-09-18 10:55:45.330861 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] ************************* 2025-09-18 10:55:45.330872 | orchestrator | Thursday 18 September 2025 10:55:27 +0000 (0:00:03.435) 0:00:17.637 **** 2025-09-18 10:55:45.330883 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:55:45.330894 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service) 2025-09-18 10:55:45.330905 | orchestrator | 2025-09-18 10:55:45.330915 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] ************************* 2025-09-18 10:55:45.330926 | orchestrator | Thursday 18 September 2025 10:55:31 +0000 (0:00:03.974) 0:00:21.612 **** 2025-09-18 10:55:45.330941 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:55:45.330959 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin) 2025-09-18 10:55:45.330978 | orchestrator | 2025-09-18 10:55:45.330997 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ******************** 2025-09-18 10:55:45.331018 | orchestrator | Thursday 18 September 2025 10:55:38 +0000 (0:00:06.758) 0:00:28.371 **** 2025-09-18 10:55:45.331037 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin) 2025-09-18 10:55:45.331058 | orchestrator | 2025-09-18 10:55:45.331077 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:55:45.331097 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331135 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331186 | orchestrator | 
testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331208 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331229 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331269 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331291 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 10:55:45.331311 | orchestrator | 2025-09-18 10:55:45.331330 | orchestrator | 2025-09-18 10:55:45.331349 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:55:45.331369 | orchestrator | Thursday 18 September 2025 10:55:44 +0000 (0:00:06.029) 0:00:34.400 **** 2025-09-18 10:55:45.331389 | orchestrator | =============================================================================== 2025-09-18 10:55:45.331409 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 7.08s 2025-09-18 10:55:45.331427 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.76s 2025-09-18 10:55:45.331446 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 6.03s 2025-09-18 10:55:45.331465 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.97s 2025-09-18 10:55:45.331484 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.75s 2025-09-18 10:55:45.331503 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.44s 2025-09-18 10:55:45.331522 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.57s 2025-09-18 10:55:45.332453 | orchestrator | Group hosts based on Kolla 
action --------------------------------------- 0.80s 2025-09-18 10:55:45.332477 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.73s 2025-09-18 10:55:45.332489 | orchestrator | 2025-09-18 10:55:45.332510 | orchestrator | 2025-09-18 10:55:45.332521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:55:45.332532 | orchestrator | 2025-09-18 10:55:45.332543 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:55:45.332554 | orchestrator | Thursday 18 September 2025 10:54:00 +0000 (0:00:00.233) 0:00:00.233 **** 2025-09-18 10:55:45.332565 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:55:45.332577 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:55:45.332588 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:55:45.332598 | orchestrator | 2025-09-18 10:55:45.332609 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:55:45.332621 | orchestrator | Thursday 18 September 2025 10:54:00 +0000 (0:00:00.261) 0:00:00.494 **** 2025-09-18 10:55:45.332642 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True) 2025-09-18 10:55:45.332654 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True) 2025-09-18 10:55:45.332666 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True) 2025-09-18 10:55:45.332677 | orchestrator | 2025-09-18 10:55:45.332688 | orchestrator | PLAY [Apply role magnum] ******************************************************* 2025-09-18 10:55:45.332699 | orchestrator | 2025-09-18 10:55:45.332711 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-18 10:55:45.332729 | orchestrator | Thursday 18 September 2025 10:54:00 +0000 (0:00:00.346) 0:00:00.840 **** 2025-09-18 10:55:45.332747 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for 
testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:55:45.332877 | orchestrator | 2025-09-18 10:55:45.332896 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************ 2025-09-18 10:55:45.332907 | orchestrator | Thursday 18 September 2025 10:54:01 +0000 (0:00:00.512) 0:00:01.352 **** 2025-09-18 10:55:45.332918 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra)) 2025-09-18 10:55:45.332929 | orchestrator | 2025-09-18 10:55:45.332940 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] *********************** 2025-09-18 10:55:45.332951 | orchestrator | Thursday 18 September 2025 10:54:04 +0000 (0:00:03.425) 0:00:04.778 **** 2025-09-18 10:55:45.332962 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal) 2025-09-18 10:55:45.332973 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public) 2025-09-18 10:55:45.332984 | orchestrator | 2025-09-18 10:55:45.332995 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************ 2025-09-18 10:55:45.333006 | orchestrator | Thursday 18 September 2025 10:54:11 +0000 (0:00:07.288) 0:00:12.067 **** 2025-09-18 10:55:45.333017 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:55:45.333028 | orchestrator | 2025-09-18 10:55:45.333039 | orchestrator | TASK [service-ks-register : magnum | Creating users] *************************** 2025-09-18 10:55:45.333049 | orchestrator | Thursday 18 September 2025 10:54:15 +0000 (0:00:03.282) 0:00:15.349 **** 2025-09-18 10:55:45.333060 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:55:45.333071 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service) 2025-09-18 10:55:45.333082 | orchestrator | 2025-09-18 10:55:45.333093 | orchestrator | TASK [service-ks-register : magnum | Creating 
roles] *************************** 2025-09-18 10:55:45.333104 | orchestrator | Thursday 18 September 2025 10:54:19 +0000 (0:00:03.960) 0:00:19.310 **** 2025-09-18 10:55:45.333115 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:55:45.333126 | orchestrator | 2025-09-18 10:55:45.333137 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] ********************** 2025-09-18 10:55:45.333147 | orchestrator | Thursday 18 September 2025 10:54:22 +0000 (0:00:03.447) 0:00:22.757 **** 2025-09-18 10:55:45.333222 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin) 2025-09-18 10:55:45.333234 | orchestrator | 2025-09-18 10:55:45.333243 | orchestrator | TASK [magnum : Creating Magnum trustee domain] ********************************* 2025-09-18 10:55:45.333253 | orchestrator | Thursday 18 September 2025 10:54:27 +0000 (0:00:04.584) 0:00:27.342 **** 2025-09-18 10:55:45.333263 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.333273 | orchestrator | 2025-09-18 10:55:45.333283 | orchestrator | TASK [magnum : Creating Magnum trustee user] *********************************** 2025-09-18 10:55:45.333292 | orchestrator | Thursday 18 September 2025 10:54:30 +0000 (0:00:03.474) 0:00:30.816 **** 2025-09-18 10:55:45.333302 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.333311 | orchestrator | 2025-09-18 10:55:45.333321 | orchestrator | TASK [magnum : Creating Magnum trustee user role] ****************************** 2025-09-18 10:55:45.333331 | orchestrator | Thursday 18 September 2025 10:54:34 +0000 (0:00:03.905) 0:00:34.721 **** 2025-09-18 10:55:45.333341 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.333351 | orchestrator | 2025-09-18 10:55:45.333361 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-09-18 10:55:45.333370 | orchestrator | Thursday 18 September 2025 10:54:38 +0000 (0:00:03.832) 0:00:38.554 **** 2025-09-18 
10:55:45.333396 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:55:45.333428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:55:45.333439 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 
'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:55:45.333450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.333462 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.333478 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.333497 | orchestrator | 2025-09-18 10:55:45.333507 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-09-18 10:55:45.333518 | orchestrator | Thursday 18 September 2025 10:54:39 +0000 (0:00:01.292) 0:00:39.846 **** 2025-09-18 10:55:45.333529 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:45.333540 | orchestrator | 2025-09-18 10:55:45.333551 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-09-18 10:55:45.333561 | orchestrator | Thursday 18 September 2025 10:54:39 +0000 (0:00:00.103) 0:00:39.950 **** 2025-09-18 10:55:45.333572 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:45.333587 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:45.333599 | orchestrator | skipping: [testbed-node-2] 
2025-09-18 10:55:45.333609 | orchestrator |
2025-09-18 10:55:45.333620 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] ***************************
2025-09-18 10:55:45.333630 | orchestrator | Thursday 18 September 2025 10:54:40 +0000 (0:00:00.441) 0:00:40.392 ****
2025-09-18 10:55:45.333641 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-09-18 10:55:45.333652 | orchestrator |
2025-09-18 10:55:45.333662 | orchestrator | TASK [magnum : Copying over kubeconfig file] ***********************************
2025-09-18 10:55:45.333673 | orchestrator | Thursday 18 September 2025 10:54:40 +0000 (0:00:00.810) 0:00:41.202 ****
2025-09-18 10:55:45.333685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333697 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333709 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333737 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333754 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333766 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333777 | orchestrator |
2025-09-18 10:55:45.333788 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ******************************
2025-09-18 10:55:45.333799 | orchestrator | Thursday 18 September 2025 10:54:43 +0000 (0:00:02.171) 0:00:43.374 ****
2025-09-18 10:55:45.333810 | orchestrator | ok: [testbed-node-0]
2025-09-18 10:55:45.333821 | orchestrator | ok: [testbed-node-1]
2025-09-18 10:55:45.333831 | orchestrator | ok: [testbed-node-2]
2025-09-18 10:55:45.333842 | orchestrator |
2025-09-18 10:55:45.333853 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-09-18 10:55:45.333864 | orchestrator | Thursday 18 September 2025 10:54:43 +0000 (0:00:00.301) 0:00:43.675 ****
2025-09-18 10:55:45.333874 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 10:55:45.333884 | orchestrator |
2025-09-18 10:55:45.333893 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] *********
2025-09-18 10:55:45.333903 | orchestrator | Thursday 18 September 2025 10:54:44 +0000 (0:00:00.763) 0:00:44.439 ****
2025-09-18 10:55:45.333913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333935 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.333961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333971 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.333997 | orchestrator |
2025-09-18 10:55:45.334008 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] ***
2025-09-18 10:55:45.334048 | orchestrator | Thursday 18 September 2025 10:54:46 +0000 (0:00:02.116) 0:00:46.556 ****
2025-09-18 10:55:45.334068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334100 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:45.334111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334138 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:45.334149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334214 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:45.334230 | orchestrator |
2025-09-18 10:55:45.334241 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ******
2025-09-18 10:55:45.334251 | orchestrator | Thursday 18 September 2025 10:54:46 +0000 (0:00:00.614) 0:00:47.171 ****
2025-09-18 10:55:45.334267 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334288 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:45.334309 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334319 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334329 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:45.334352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334373 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:45.334383 | orchestrator |
2025-09-18 10:55:45.334393 | orchestrator | TASK [magnum : Copying over config.json files for services] ********************
2025-09-18 10:55:45.334403 | orchestrator | Thursday 18 September 2025 10:54:47 +0000 (0:00:00.989) 0:00:48.161 ****
2025-09-18 10:55:45.334413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334430 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334446 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334461 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334472 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334482 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334498 | orchestrator |
2025-09-18 10:55:45.334508 | orchestrator | TASK [magnum : Copying over magnum.conf] ***************************************
2025-09-18 10:55:45.334518 | orchestrator | Thursday 18 September 2025 10:54:50 +0000 (0:00:02.245) 0:00:50.406 ****
2025-09-18 10:55:45.334528 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334570 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334586 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334597 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334607 | orchestrator |
2025-09-18 10:55:45.334617 | orchestrator | TASK [magnum : Copying over existing policy file] ******************************
2025-09-18 10:55:45.334627 | orchestrator | Thursday 18 September 2025 10:54:55 +0000 (0:00:04.895) 0:00:55.302 ****
2025-09-18 10:55:45.334643 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334658 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334668 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:55:45.334679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334696 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334706 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:55:45.334716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})
2025-09-18 10:55:45.334742 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:55:45.334752 | orchestrator |
2025-09-18 10:55:45.334762 | orchestrator | TASK [magnum : Check magnum containers] ****************************************
2025-09-18 10:55:45.334772 | orchestrator | Thursday 18 September 2025 10:54:55 +0000 (0:00:00.859) 0:00:56.161 ****
2025-09-18 10:55:45.334787 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})
2025-09-18 10:55:45.334803 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511',
'listen_port': '9511'}}}}) 2025-09-18 10:55:45.334814 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-api:2024.2', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-09-18 10:55:45.334824 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.334841 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.334856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/magnum-conductor:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 10:55:45.334875 | orchestrator | 2025-09-18 10:55:45.334885 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-09-18 10:55:45.334895 | orchestrator | Thursday 18 September 2025 10:54:58 +0000 (0:00:02.809) 0:00:58.971 **** 2025-09-18 10:55:45.334904 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:55:45.334914 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:55:45.334924 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:55:45.334934 | orchestrator | 2025-09-18 10:55:45.334944 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-09-18 10:55:45.334953 | orchestrator | Thursday 18 
September 2025 10:54:59 +0000 (0:00:00.325) 0:00:59.297 **** 2025-09-18 10:55:45.334963 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.334973 | orchestrator | 2025-09-18 10:55:45.334982 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-09-18 10:55:45.334992 | orchestrator | Thursday 18 September 2025 10:55:00 +0000 (0:00:01.860) 0:01:01.158 **** 2025-09-18 10:55:45.335002 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.335011 | orchestrator | 2025-09-18 10:55:45.335021 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-09-18 10:55:45.335031 | orchestrator | Thursday 18 September 2025 10:55:03 +0000 (0:00:02.205) 0:01:03.363 **** 2025-09-18 10:55:45.335040 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.335050 | orchestrator | 2025-09-18 10:55:45.335059 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 10:55:45.335069 | orchestrator | Thursday 18 September 2025 10:55:18 +0000 (0:00:15.708) 0:01:19.071 **** 2025-09-18 10:55:45.335079 | orchestrator | 2025-09-18 10:55:45.335089 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 10:55:45.335099 | orchestrator | Thursday 18 September 2025 10:55:18 +0000 (0:00:00.066) 0:01:19.138 **** 2025-09-18 10:55:45.335108 | orchestrator | 2025-09-18 10:55:45.335118 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-09-18 10:55:45.335128 | orchestrator | Thursday 18 September 2025 10:55:18 +0000 (0:00:00.072) 0:01:19.210 **** 2025-09-18 10:55:45.335137 | orchestrator | 2025-09-18 10:55:45.335147 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-09-18 10:55:45.335212 | orchestrator | Thursday 18 September 2025 10:55:19 +0000 (0:00:00.072) 0:01:19.283 **** 2025-09-18 
10:55:45.335222 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.335232 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:55:45.335242 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:55:45.335251 | orchestrator | 2025-09-18 10:55:45.335261 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-09-18 10:55:45.335271 | orchestrator | Thursday 18 September 2025 10:55:32 +0000 (0:00:13.905) 0:01:33.189 **** 2025-09-18 10:55:45.335280 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:55:45.335290 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:55:45.335300 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:55:45.335309 | orchestrator | 2025-09-18 10:55:45.335319 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:55:45.335329 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-09-18 10:55:45.335339 | orchestrator | testbed-node-1 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:55:45.335349 | orchestrator | testbed-node-2 : ok=13  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 10:55:45.335365 | orchestrator | 2025-09-18 10:55:45.335375 | orchestrator | 2025-09-18 10:55:45.335385 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:55:45.335395 | orchestrator | Thursday 18 September 2025 10:55:42 +0000 (0:00:09.956) 0:01:43.145 **** 2025-09-18 10:55:45.335404 | orchestrator | =============================================================================== 2025-09-18 10:55:45.335414 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 15.71s 2025-09-18 10:55:45.335430 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.91s 2025-09-18 10:55:45.335440 | 
orchestrator | magnum : Restart magnum-conductor container ----------------------------- 9.96s 2025-09-18 10:55:45.335450 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 7.29s 2025-09-18 10:55:45.335459 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 4.90s 2025-09-18 10:55:45.335469 | orchestrator | service-ks-register : magnum | Granting user roles ---------------------- 4.58s 2025-09-18 10:55:45.335479 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.96s 2025-09-18 10:55:45.335489 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 3.91s 2025-09-18 10:55:45.335504 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.83s 2025-09-18 10:55:45.335514 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.47s 2025-09-18 10:55:45.335524 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.45s 2025-09-18 10:55:45.335533 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.43s 2025-09-18 10:55:45.335543 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.28s 2025-09-18 10:55:45.335553 | orchestrator | magnum : Check magnum containers ---------------------------------------- 2.81s 2025-09-18 10:55:45.335562 | orchestrator | magnum : Copying over config.json files for services -------------------- 2.25s 2025-09-18 10:55:45.335572 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.21s 2025-09-18 10:55:45.335582 | orchestrator | magnum : Copying over kubeconfig file ----------------------------------- 2.17s 2025-09-18 10:55:45.335591 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.12s 2025-09-18 10:55:45.335601 | orchestrator | 
magnum : Creating Magnum database --------------------------------------- 1.86s 2025-09-18 10:55:45.335610 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 1.29s 2025-09-18 10:55:45.335620 | orchestrator | 2025-09-18 10:55:45 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:55:45.335630 | orchestrator | 2025-09-18 10:55:45 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:55:45.335640 | orchestrator | 2025-09-18 10:55:45 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:45.335650 | orchestrator | 2025-09-18 10:55:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:55:48.372467 | orchestrator | 2025-09-18 10:55:48 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:55:48.372893 | orchestrator | 2025-09-18 10:55:48 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:55:48.373732 | orchestrator | 2025-09-18 10:55:48 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:55:48.374530 | orchestrator | 2025-09-18 10:55:48 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:55:48.374734 | orchestrator | 2025-09-18 10:55:48 | INFO  | Wait 1 second(s) until the next check
f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:34.886398 | orchestrator | 2025-09-18 10:57:34 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:57:34.888617 | orchestrator | 2025-09-18 10:57:34 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:34.891095 | orchestrator | 2025-09-18 10:57:34 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:34.891628 | orchestrator | 2025-09-18 10:57:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:37.943743 | orchestrator | 2025-09-18 10:57:37 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:37.946147 | orchestrator | 2025-09-18 10:57:37 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:57:37.948449 | orchestrator | 2025-09-18 10:57:37 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:37.949683 | orchestrator | 2025-09-18 10:57:37 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:37.949826 | orchestrator | 2025-09-18 10:57:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:41.008547 | orchestrator | 2025-09-18 10:57:41 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:41.010781 | orchestrator | 2025-09-18 10:57:41 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:57:41.013228 | orchestrator | 2025-09-18 10:57:41 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:41.014419 | orchestrator | 2025-09-18 10:57:41 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:41.014441 | orchestrator | 2025-09-18 10:57:41 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:44.062302 | orchestrator | 2025-09-18 10:57:44 | INFO  | Task 
f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:44.063626 | orchestrator | 2025-09-18 10:57:44 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state STARTED 2025-09-18 10:57:44.065080 | orchestrator | 2025-09-18 10:57:44 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:44.065828 | orchestrator | 2025-09-18 10:57:44 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:44.065872 | orchestrator | 2025-09-18 10:57:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:47.111145 | orchestrator | 2025-09-18 10:57:47 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:47.117304 | orchestrator | 2025-09-18 10:57:47 | INFO  | Task a77f0e3b-ebdc-41c0-b717-b4b3be8441f8 is in state SUCCESS 2025-09-18 10:57:47.119810 | orchestrator | 2025-09-18 10:57:47.119841 | orchestrator | 2025-09-18 10:57:47.119854 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:57:47.119866 | orchestrator | 2025-09-18 10:57:47.119878 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:57:47.119890 | orchestrator | Thursday 18 September 2025 10:55:10 +0000 (0:00:00.276) 0:00:00.276 **** 2025-09-18 10:57:47.119901 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:57:47.119914 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:57:47.119924 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:57:47.119935 | orchestrator | 2025-09-18 10:57:47.119947 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:57:47.119974 | orchestrator | Thursday 18 September 2025 10:55:10 +0000 (0:00:00.333) 0:00:00.610 **** 2025-09-18 10:57:47.119986 | orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-09-18 10:57:47.120025 | orchestrator | ok: [testbed-node-1] => 
(item=enable_glance_True) 2025-09-18 10:57:47.120038 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-09-18 10:57:47.120073 | orchestrator | 2025-09-18 10:57:47.120084 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-09-18 10:57:47.120095 | orchestrator | 2025-09-18 10:57:47.120114 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 10:57:47.120125 | orchestrator | Thursday 18 September 2025 10:55:11 +0000 (0:00:00.464) 0:00:01.075 **** 2025-09-18 10:57:47.120137 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:57:47.120148 | orchestrator | 2025-09-18 10:57:47.120160 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-09-18 10:57:47.120171 | orchestrator | Thursday 18 September 2025 10:55:11 +0000 (0:00:00.544) 0:00:01.619 **** 2025-09-18 10:57:47.120182 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-09-18 10:57:47.120192 | orchestrator | 2025-09-18 10:57:47.120204 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-09-18 10:57:47.120215 | orchestrator | Thursday 18 September 2025 10:55:15 +0000 (0:00:03.856) 0:00:05.476 **** 2025-09-18 10:57:47.120226 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-09-18 10:57:47.120238 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-09-18 10:57:47.120248 | orchestrator | 2025-09-18 10:57:47.120259 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-09-18 10:57:47.120270 | orchestrator | Thursday 18 September 2025 10:55:22 +0000 (0:00:06.901) 0:00:12.377 **** 2025-09-18 10:57:47.120281 | orchestrator | ok: 
[testbed-node-0] => (item=service) 2025-09-18 10:57:47.120294 | orchestrator | 2025-09-18 10:57:47.120304 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-09-18 10:57:47.120317 | orchestrator | Thursday 18 September 2025 10:55:26 +0000 (0:00:03.455) 0:00:15.833 **** 2025-09-18 10:57:47.120328 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:57:47.120339 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-09-18 10:57:47.120350 | orchestrator | 2025-09-18 10:57:47.120361 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-09-18 10:57:47.120372 | orchestrator | Thursday 18 September 2025 10:55:30 +0000 (0:00:04.086) 0:00:19.920 **** 2025-09-18 10:57:47.120384 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:57:47.120395 | orchestrator | 2025-09-18 10:57:47.120406 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-09-18 10:57:47.120419 | orchestrator | Thursday 18 September 2025 10:55:33 +0000 (0:00:03.398) 0:00:23.318 **** 2025-09-18 10:57:47.120432 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-09-18 10:57:47.120444 | orchestrator | 2025-09-18 10:57:47.120456 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-09-18 10:57:47.120468 | orchestrator | Thursday 18 September 2025 10:55:38 +0000 (0:00:04.396) 0:00:27.715 **** 2025-09-18 10:57:47.120500 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': 
['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.120534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.120549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 
'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.120568 | orchestrator | 2025-09-18 10:57:47.120580 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 10:57:47.120592 | orchestrator | Thursday 18 September 2025 10:55:41 +0000 (0:00:03.278) 0:00:30.993 **** 2025-09-18 10:57:47.120603 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:57:47.120615 | orchestrator | 2025-09-18 10:57:47.120634 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-09-18 10:57:47.120646 | orchestrator | Thursday 18 September 2025 10:55:42 +0000 (0:00:00.692) 0:00:31.686 **** 2025-09-18 10:57:47.120657 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:57:47.120668 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:57:47.120680 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:57:47.120691 | orchestrator | 2025-09-18 10:57:47.120702 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-09-18 10:57:47.120713 | orchestrator | Thursday 18 September 2025 
10:55:46 +0000 (0:00:04.503) 0:00:36.190 **** 2025-09-18 10:57:47.120729 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120740 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120752 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120763 | orchestrator | 2025-09-18 10:57:47.120774 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-09-18 10:57:47.120785 | orchestrator | Thursday 18 September 2025 10:55:48 +0000 (0:00:01.576) 0:00:37.766 **** 2025-09-18 10:57:47.120796 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120808 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120819 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:57:47.120830 | orchestrator | 2025-09-18 10:57:47.120841 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-09-18 10:57:47.120852 | orchestrator | Thursday 18 September 2025 10:55:49 +0000 (0:00:01.179) 0:00:38.945 **** 2025-09-18 10:57:47.120863 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:57:47.120874 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:57:47.120885 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:57:47.120897 | orchestrator | 2025-09-18 10:57:47.120908 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-09-18 10:57:47.120919 | orchestrator | Thursday 18 September 2025 10:55:50 +0000 (0:00:00.748) 0:00:39.694 **** 2025-09-18 
10:57:47.120930 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.120941 | orchestrator | 2025-09-18 10:57:47.120952 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-09-18 10:57:47.120963 | orchestrator | Thursday 18 September 2025 10:55:50 +0000 (0:00:00.266) 0:00:39.961 **** 2025-09-18 10:57:47.120974 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.120986 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:57:47.121015 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:57:47.121027 | orchestrator | 2025-09-18 10:57:47.121038 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 10:57:47.121048 | orchestrator | Thursday 18 September 2025 10:55:50 +0000 (0:00:00.265) 0:00:40.226 **** 2025-09-18 10:57:47.121059 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 10:57:47.121070 | orchestrator | 2025-09-18 10:57:47.121081 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-09-18 10:57:47.121099 | orchestrator | Thursday 18 September 2025 10:55:51 +0000 (0:00:00.490) 0:00:40.717 **** 2025-09-18 10:57:47.121117 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.121136 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.121150 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.121168 | orchestrator | 2025-09-18 10:57:47.121179 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-09-18 10:57:47.121190 | orchestrator | Thursday 18 September 2025 10:55:54 +0000 (0:00:03.647) 0:00:44.365 **** 2025-09-18 10:57:47.121214 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121228 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.121240 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121260 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:57:47.121279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121291 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:57:47.121302 | orchestrator | 2025-09-18 10:57:47.121313 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-09-18 10:57:47.121324 | orchestrator | Thursday 18 September 2025 10:55:57 +0000 (0:00:03.144) 0:00:47.509 **** 2025-09-18 10:57:47.121367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121388 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.121407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121420 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:57:47.121437 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-09-18 10:57:47.121462 | orchestrator | 
skipping: [testbed-node-2] 2025-09-18 10:57:47.121473 | orchestrator | 2025-09-18 10:57:47.121484 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-09-18 10:57:47.121495 | orchestrator | Thursday 18 September 2025 10:56:01 +0000 (0:00:03.156) 0:00:50.666 **** 2025-09-18 10:57:47.121505 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.121516 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:57:47.121527 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:57:47.121538 | orchestrator | 2025-09-18 10:57:47.121549 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-09-18 10:57:47.121559 | orchestrator | Thursday 18 September 2025 10:56:04 +0000 (0:00:03.659) 0:00:54.326 **** 2025-09-18 10:57:47.121577 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.121595 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.121614 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 
check inter 2000 rise 2 fall 5', '']}}}})
2025-09-18 10:57:47.121626 | orchestrator |
2025-09-18 10:57:47.121637 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-09-18 10:57:47.121648 | orchestrator | Thursday 18 September 2025 10:56:08 +0000 (0:00:04.135) 0:00:58.462 ****
2025-09-18 10:57:47.121659 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:57:47.121670 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:57:47.121680 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.121691 | orchestrator |
2025-09-18 10:57:47.121702 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-09-18 10:57:47.121712 | orchestrator | Thursday 18 September 2025 10:56:14 +0000 (0:00:05.203) 0:01:03.665 ****
2025-09-18 10:57:47.121723 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.121734 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.121745 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.121755 | orchestrator |
2025-09-18 10:57:47.121766 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-09-18 10:57:47.121784 | orchestrator | Thursday 18 September 2025 10:56:17 +0000 (0:00:03.257) 0:01:06.922 ****
2025-09-18 10:57:47.121795 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.121806 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.121817 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.121827 | orchestrator |
2025-09-18 10:57:47.121838 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-09-18 10:57:47.121849 | orchestrator | Thursday 18 September 2025 10:56:21 +0000 (0:00:03.996) 0:01:10.919 ****
2025-09-18 10:57:47.121860 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.121870 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.121881 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.121892 | orchestrator |
2025-09-18 10:57:47.121908 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-09-18 10:57:47.121919 | orchestrator | Thursday 18 September 2025 10:56:24 +0000 (0:00:03.654) 0:01:14.573 ****
2025-09-18 10:57:47.121935 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.121946 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.121957 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.121968 | orchestrator |
2025-09-18 10:57:47.121978 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-09-18 10:57:47.121989 | orchestrator | Thursday 18 September 2025 10:56:28 +0000 (0:00:03.757) 0:01:18.331 ****
2025-09-18 10:57:47.122103 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.122120 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.122131 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.122142 | orchestrator |
2025-09-18 10:57:47.122153 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-09-18 10:57:47.122164 | orchestrator | Thursday 18 September 2025 10:56:29 +0000 (0:00:00.324) 0:01:18.656 ****
2025-09-18 10:57:47.122175 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-18 10:57:47.122186 | orchestrator | skipping: [testbed-node-0]
2025-09-18 10:57:47.122197 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-18 10:57:47.122208 | orchestrator | skipping: [testbed-node-1]
2025-09-18 10:57:47.122219 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-09-18 10:57:47.122230 | orchestrator | skipping: [testbed-node-2]
2025-09-18 10:57:47.122241 | orchestrator |
2025-09-18 10:57:47.122252 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-09-18 10:57:47.122263 | orchestrator | Thursday 18 September 2025 10:56:32 +0000 (0:00:03.706) 0:01:22.363 **** 2025-09-18 10:57:47.122275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 
10:57:47.122305 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.122327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/glance-api:2024.2', 
'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', '', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-09-18 10:57:47.122339 | orchestrator | 2025-09-18 10:57:47.122350 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-09-18 10:57:47.122361 | orchestrator | Thursday 18 September 2025 10:56:36 +0000 (0:00:03.786) 0:01:26.149 **** 2025-09-18 10:57:47.122372 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:57:47.122382 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:57:47.122393 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
10:57:47.122404 | orchestrator |
2025-09-18 10:57:47.122414 | orchestrator | TASK [glance : Creating Glance database] ***************************************
2025-09-18 10:57:47.122425 | orchestrator | Thursday 18 September 2025 10:56:36 +0000 (0:00:00.263) 0:01:26.413 ****
2025-09-18 10:57:47.122436 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122447 | orchestrator |
2025-09-18 10:57:47.122457 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] **********
2025-09-18 10:57:47.122468 | orchestrator | Thursday 18 September 2025 10:56:38 +0000 (0:00:02.190) 0:01:28.603 ****
2025-09-18 10:57:47.122479 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122489 | orchestrator |
2025-09-18 10:57:47.122500 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] ****************
2025-09-18 10:57:47.122517 | orchestrator | Thursday 18 September 2025 10:56:41 +0000 (0:00:02.367) 0:01:30.971 ****
2025-09-18 10:57:47.122536 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122547 | orchestrator |
2025-09-18 10:57:47.122558 | orchestrator | TASK [glance : Running Glance bootstrap container] *****************************
2025-09-18 10:57:47.122569 | orchestrator | Thursday 18 September 2025 10:56:43 +0000 (0:00:02.089) 0:01:33.060 ****
2025-09-18 10:57:47.122580 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122591 | orchestrator |
2025-09-18 10:57:47.122602 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] ***************
2025-09-18 10:57:47.122613 | orchestrator | Thursday 18 September 2025 10:57:09 +0000 (0:00:25.683) 0:01:58.743 ****
2025-09-18 10:57:47.122624 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122635 | orchestrator |
2025-09-18 10:57:47.122653 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-18 10:57:47.122664 | orchestrator | Thursday 18 September 2025 10:57:11 +0000 (0:00:02.214) 0:02:00.958 ****
2025-09-18 10:57:47.122676 | orchestrator |
2025-09-18 10:57:47.122687 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-18 10:57:47.122698 | orchestrator | Thursday 18 September 2025 10:57:11 +0000 (0:00:00.063) 0:02:01.022 ****
2025-09-18 10:57:47.122709 | orchestrator |
2025-09-18 10:57:47.122720 | orchestrator | TASK [glance : Flush handlers] *************************************************
2025-09-18 10:57:47.122731 | orchestrator | Thursday 18 September 2025 10:57:11 +0000 (0:00:00.073) 0:02:01.095 ****
2025-09-18 10:57:47.122742 | orchestrator |
2025-09-18 10:57:47.122758 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************
2025-09-18 10:57:47.122770 | orchestrator | Thursday 18 September 2025 10:57:11 +0000 (0:00:00.068) 0:02:01.163 ****
2025-09-18 10:57:47.122781 | orchestrator | changed: [testbed-node-0]
2025-09-18 10:57:47.122792 | orchestrator | changed: [testbed-node-2]
2025-09-18 10:57:47.122803 | orchestrator | changed: [testbed-node-1]
2025-09-18 10:57:47.122814 | orchestrator |
2025-09-18 10:57:47.122826 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 10:57:47.122838 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0
2025-09-18 10:57:47.122851 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-18 10:57:47.122862 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0
2025-09-18 10:57:47.122873 | orchestrator |
2025-09-18 10:57:47.122884 | orchestrator |
2025-09-18 10:57:47.122896 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 10:57:47.122907 | orchestrator | Thursday 18 September 2025 10:57:45 +0000 (0:00:34.378) 0:02:35.542 ****
2025-09-18 10:57:47.122918 | orchestrator | ===============================================================================
2025-09-18 10:57:47.122929 | orchestrator | glance : Restart glance-api container ---------------------------------- 34.38s
2025-09-18 10:57:47.122940 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 25.68s
2025-09-18 10:57:47.122951 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.90s
2025-09-18 10:57:47.122962 | orchestrator | glance : Copying over glance-api.conf ----------------------------------- 5.20s
2025-09-18 10:57:47.122973 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 4.50s
2025-09-18 10:57:47.122985 | orchestrator | service-ks-register : glance | Granting user roles ---------------------- 4.40s
2025-09-18 10:57:47.123054 | orchestrator | glance : Copying over config.json files for services -------------------- 4.14s
2025-09-18 10:57:47.123067 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.09s
2025-09-18 10:57:47.123078 | orchestrator | glance : Copying over glance-swift.conf for glance_api ------------------ 4.00s
2025-09-18 10:57:47.123097 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.86s
2025-09-18 10:57:47.123108 | orchestrator | glance : Check glance containers ---------------------------------------- 3.79s
2025-09-18 10:57:47.123119 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 3.76s
2025-09-18 10:57:47.123130 | orchestrator | glance : Copying over glance-haproxy-tls.cfg ---------------------------- 3.71s
2025-09-18 10:57:47.123140 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 3.66s
2025-09-18 10:57:47.123151 | orchestrator | glance : Copying over glance-image-import.conf
-------------------------- 3.65s 2025-09-18 10:57:47.123162 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 3.65s 2025-09-18 10:57:47.123173 | orchestrator | service-ks-register : glance | Creating projects ------------------------ 3.46s 2025-09-18 10:57:47.123184 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.40s 2025-09-18 10:57:47.123195 | orchestrator | glance : Ensuring config directories exist ------------------------------ 3.28s 2025-09-18 10:57:47.123205 | orchestrator | glance : Copying over glance-cache.conf for glance_api ------------------ 3.26s 2025-09-18 10:57:47.123216 | orchestrator | 2025-09-18 10:57:47 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:47.123227 | orchestrator | 2025-09-18 10:57:47 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:47.123238 | orchestrator | 2025-09-18 10:57:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:50.172606 | orchestrator | 2025-09-18 10:57:50 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:50.173149 | orchestrator | 2025-09-18 10:57:50 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:50.174121 | orchestrator | 2025-09-18 10:57:50 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:57:50.175071 | orchestrator | 2025-09-18 10:57:50 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:50.175097 | orchestrator | 2025-09-18 10:57:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:53.233865 | orchestrator | 2025-09-18 10:57:53 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:53.235837 | orchestrator | 2025-09-18 10:57:53 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:53.237126 | 
orchestrator | 2025-09-18 10:57:53 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:57:53.238294 | orchestrator | 2025-09-18 10:57:53 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:53.238337 | orchestrator | 2025-09-18 10:57:53 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:56.283765 | orchestrator | 2025-09-18 10:57:56 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:56.284429 | orchestrator | 2025-09-18 10:57:56 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:56.285482 | orchestrator | 2025-09-18 10:57:56 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:57:56.286415 | orchestrator | 2025-09-18 10:57:56 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:56.286442 | orchestrator | 2025-09-18 10:57:56 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:57:59.326717 | orchestrator | 2025-09-18 10:57:59 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:57:59.329192 | orchestrator | 2025-09-18 10:57:59 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:57:59.332387 | orchestrator | 2025-09-18 10:57:59 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:57:59.334133 | orchestrator | 2025-09-18 10:57:59 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:57:59.334155 | orchestrator | 2025-09-18 10:57:59 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:02.377047 | orchestrator | 2025-09-18 10:58:02 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:02.379532 | orchestrator | 2025-09-18 10:58:02 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:02.383908 | orchestrator | 2025-09-18 
10:58:02 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:02.386426 | orchestrator | 2025-09-18 10:58:02 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:02.386580 | orchestrator | 2025-09-18 10:58:02 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:05.437862 | orchestrator | 2025-09-18 10:58:05 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:05.438105 | orchestrator | 2025-09-18 10:58:05 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:05.438207 | orchestrator | 2025-09-18 10:58:05 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:05.439014 | orchestrator | 2025-09-18 10:58:05 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:05.439055 | orchestrator | 2025-09-18 10:58:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:08.477327 | orchestrator | 2025-09-18 10:58:08 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:08.479182 | orchestrator | 2025-09-18 10:58:08 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:08.481497 | orchestrator | 2025-09-18 10:58:08 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:08.483824 | orchestrator | 2025-09-18 10:58:08 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:08.483847 | orchestrator | 2025-09-18 10:58:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:11.524234 | orchestrator | 2025-09-18 10:58:11 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:11.525333 | orchestrator | 2025-09-18 10:58:11 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:11.526526 | orchestrator | 2025-09-18 10:58:11 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:11.529162 | orchestrator | 2025-09-18 10:58:11 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:11.529546 | orchestrator | 2025-09-18 10:58:11 | INFO  | Wait 1 second(s) until the next check [identical status checks for the same four tasks repeated from 10:58:14 through 10:58:26] 2025-09-18 10:58:29.807747 | orchestrator | 2025-09-18 10:58:29 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:29.809415 | orchestrator | 2025-09-18 10:58:29 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:29.809451 | orchestrator | 2025-09-18 10:58:29 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:29.811938 | orchestrator | 2025-09-18 10:58:29 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:29.812295 | orchestrator | 2025-09-18 10:58:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:32.857022 | orchestrator | 2025-09-18 10:58:32 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:32.859144 | orchestrator | 2025-09-18 10:58:32 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state STARTED 2025-09-18 10:58:32.860256 | orchestrator | 2025-09-18 10:58:32 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:32.861081 | orchestrator | 2025-09-18 10:58:32 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:32.861134 | orchestrator | 2025-09-18 10:58:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:35.904782 | orchestrator | 2025-09-18 10:58:35 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:35.908734 | orchestrator | 2025-09-18 10:58:35 | INFO  | Task 9ff9b736-2921-486b-a74a-a73ae711ede6 is in state SUCCESS 2025-09-18 10:58:35.910785 | orchestrator | 2025-09-18 10:58:35.910808 | orchestrator | 2025-09-18 10:58:35.910815 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 10:58:35.910821 | orchestrator | 2025-09-18 10:58:35.910827 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 10:58:35.910832 | orchestrator | Thursday 18 September 2025 10:55:47 +0000 (0:00:00.244) 0:00:00.244 **** 2025-09-18 10:58:35.910838 | orchestrator | ok: [testbed-node-0] 2025-09-18 10:58:35.910845 | orchestrator | ok: [testbed-node-1] 2025-09-18 10:58:35.910850 | orchestrator | ok: [testbed-node-2] 2025-09-18 10:58:35.910855 | orchestrator | ok: [testbed-node-3] 
2025-09-18 10:58:35.910861 | orchestrator | ok: [testbed-node-4] 2025-09-18 10:58:35.910866 | orchestrator | ok: [testbed-node-5] 2025-09-18 10:58:35.910871 | orchestrator | 2025-09-18 10:58:35.910876 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 10:58:35.910882 | orchestrator | Thursday 18 September 2025 10:55:48 +0000 (0:00:00.582) 0:00:00.826 **** 2025-09-18 10:58:35.910887 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-09-18 10:58:35.910893 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-09-18 10:58:35.910898 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-09-18 10:58:35.910903 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-09-18 10:58:35.910908 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-09-18 10:58:35.910913 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-09-18 10:58:35.910918 | orchestrator | 2025-09-18 10:58:35.910923 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-09-18 10:58:35.910929 | orchestrator | 2025-09-18 10:58:35.910964 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 10:58:35.910983 | orchestrator | Thursday 18 September 2025 10:55:48 +0000 (0:00:00.537) 0:00:01.364 **** 2025-09-18 10:58:35.910989 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:58:35.910995 | orchestrator | 2025-09-18 10:58:35.911001 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-09-18 10:58:35.911006 | orchestrator | Thursday 18 September 2025 10:55:49 +0000 (0:00:01.056) 0:00:02.421 **** 2025-09-18 10:58:35.911012 | orchestrator | changed: [testbed-node-0] => 
(item=cinderv3 (volumev3)) 2025-09-18 10:58:35.911017 | orchestrator | 2025-09-18 10:58:35.911022 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-09-18 10:58:35.911027 | orchestrator | Thursday 18 September 2025 10:55:53 +0000 (0:00:03.540) 0:00:05.961 **** 2025-09-18 10:58:35.911032 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-09-18 10:58:35.911038 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-09-18 10:58:35.911043 | orchestrator | 2025-09-18 10:58:35.911048 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-09-18 10:58:35.911053 | orchestrator | Thursday 18 September 2025 10:56:00 +0000 (0:00:06.699) 0:00:12.661 **** 2025-09-18 10:58:35.911059 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 10:58:35.911082 | orchestrator | 2025-09-18 10:58:35.911088 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-09-18 10:58:35.911093 | orchestrator | Thursday 18 September 2025 10:56:03 +0000 (0:00:03.316) 0:00:15.978 **** 2025-09-18 10:58:35.911115 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 10:58:35.911120 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-09-18 10:58:35.911125 | orchestrator | 2025-09-18 10:58:35.911130 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-09-18 10:58:35.911136 | orchestrator | Thursday 18 September 2025 10:56:07 +0000 (0:00:04.085) 0:00:20.064 **** 2025-09-18 10:58:35.911141 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 10:58:35.911146 | orchestrator | 2025-09-18 10:58:35.911151 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] 
********************** 2025-09-18 10:58:35.911156 | orchestrator | Thursday 18 September 2025 10:56:11 +0000 (0:00:03.665) 0:00:23.729 **** 2025-09-18 10:58:35.911161 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-09-18 10:58:35.911167 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-09-18 10:58:35.911172 | orchestrator | 2025-09-18 10:58:35.911177 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 2025-09-18 10:58:35.911182 | orchestrator | Thursday 18 September 2025 10:56:19 +0000 (0:00:08.032) 0:00:31.762 **** 2025-09-18 10:58:35.911190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911207 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911213 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': 
{'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911231 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911238 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911256 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911262 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911271 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911277 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911288 | orchestrator | 2025-09-18 10:58:35.911297 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 10:58:35.911302 | orchestrator | Thursday 18 September 2025 10:56:22 +0000 (0:00:02.963) 0:00:34.725 **** 2025-09-18 10:58:35.911307 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.911313 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.911318 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.911323 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.911329 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.911334 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.911339 | orchestrator | 2025-09-18 10:58:35.911344 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 10:58:35.911350 | orchestrator | Thursday 18 September 2025 10:56:22 +0000 (0:00:00.685) 0:00:35.410 **** 2025-09-18 10:58:35.911355 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.911360 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.911365 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.911371 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:58:35.911376 | orchestrator | 2025-09-18 10:58:35.911381 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-09-18 10:58:35.911387 | orchestrator | Thursday 18 September 2025 10:56:24 +0000 (0:00:01.254) 0:00:36.665 **** 2025-09-18 10:58:35.911393 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-09-18 10:58:35.911398 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-09-18 
10:58:35.911408 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-09-18 10:58:35.911414 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-09-18 10:58:35.911420 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-09-18 10:58:35.911425 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-09-18 10:58:35.911431 | orchestrator | 2025-09-18 10:58:35.911436 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-09-18 10:58:35.911442 | orchestrator | Thursday 18 September 2025 10:56:26 +0000 (0:00:02.286) 0:00:38.952 **** 2025-09-18 10:58:35.911449 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 10:58:35.911457 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 10:58:35.911463 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 10:58:35.911473 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  
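Each container definition in the log above carries a `healthcheck` entry such as `healthcheck_port cinder-volume 5672` or `healthcheck_curl http://192.168.16.13:8776`, retried according to its `interval`/`retries`/`timeout` settings. As a rough illustration only (the real kolla `healthcheck_port` helper additionally verifies that the named process owns the connection; this sketch just tests TCP reachability), a port-based liveness probe amounts to:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Illustrative stand-in for kolla's `healthcheck_port` helper: the real
    script also checks which process holds the connection, while this
    sketch only tests raw TCP reachability.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A container whose probe keeps returning false past the configured `retries` is marked unhealthy by the container runtime; the `start_period` gives the service time to bind its port before failures count.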
2025-09-18 10:58:35.911480 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 10:58:35.911489 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-09-18 10:58:35.911581 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911587 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911599 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911611 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911618 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 
'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911624 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-09-18 10:58:35.911630 | orchestrator | 2025-09-18 10:58:35.911636 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-09-18 10:58:35.911642 | orchestrator | Thursday 18 September 2025 10:56:30 +0000 (0:00:04.115) 0:00:43.067 **** 2025-09-18 10:58:35.911648 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:58:35.911654 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:58:35.911660 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-09-18 10:58:35.911666 | orchestrator | 2025-09-18 10:58:35.911672 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-09-18 10:58:35.911678 | orchestrator | Thursday 18 September 2025 10:56:32 +0000 (0:00:02.112) 0:00:45.180 **** 2025-09-18 10:58:35.911683 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-09-18 10:58:35.911689 | orchestrator | 
changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-09-18 10:58:35.911695 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-09-18 10:58:35.911700 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 10:58:35.911706 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 10:58:35.911715 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-09-18 10:58:35.911726 | orchestrator | 2025-09-18 10:58:35.911732 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-09-18 10:58:35.911737 | orchestrator | Thursday 18 September 2025 10:56:35 +0000 (0:00:03.236) 0:00:48.416 **** 2025-09-18 10:58:35.911744 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-09-18 10:58:35.911749 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-09-18 10:58:35.911755 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-09-18 10:58:35.911760 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-09-18 10:58:35.911766 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-09-18 10:58:35.911771 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-09-18 10:58:35.911776 | orchestrator | 2025-09-18 10:58:35.911781 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-09-18 10:58:35.911787 | orchestrator | Thursday 18 September 2025 10:56:36 +0000 (0:00:01.107) 0:00:49.524 **** 2025-09-18 10:58:35.911792 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.911797 | orchestrator | 2025-09-18 10:58:35.911802 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-09-18 10:58:35.911807 | orchestrator | Thursday 18 September 2025 10:56:37 +0000 (0:00:00.159) 0:00:49.683 **** 2025-09-18 
10:58:35.911812 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.911818 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.911823 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.911828 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.911833 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.911838 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.911843 | orchestrator | 2025-09-18 10:58:35.911848 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 10:58:35.911854 | orchestrator | Thursday 18 September 2025 10:56:37 +0000 (0:00:00.773) 0:00:50.456 **** 2025-09-18 10:58:35.911860 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 10:58:35.911865 | orchestrator | 2025-09-18 10:58:35.911871 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-09-18 10:58:35.911876 | orchestrator | Thursday 18 September 2025 10:56:39 +0000 (0:00:01.258) 0:00:51.715 **** 2025-09-18 10:58:35.911881 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911887 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.911918 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911923 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911929 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 
'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.911976 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912143 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912153 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912158 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912164 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912169 | orchestrator | 2025-09-18 10:58:35.912174 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-09-18 10:58:35.912180 | orchestrator | Thursday 18 September 2025 10:56:42 +0000 (0:00:02.898) 0:00:54.614 **** 2025-09-18 10:58:35.912185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912207 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.912213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912218 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912224 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.912229 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912243 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.912248 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 
'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912257 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912262 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.912268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912273 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912279 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.912284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912343 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.912348 | orchestrator | 2025-09-18 10:58:35.912353 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ****** 2025-09-18 10:58:35.912359 | orchestrator | Thursday 18 September 2025 10:56:43 +0000 (0:00:01.189) 0:00:55.803 **** 2025-09-18 10:58:35.912367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912373 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912379 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.912384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912395 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912401 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.912406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912415 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 
5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912421 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.912426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912437 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912447 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.912452 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912461 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 
'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912466 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.912472 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912477 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.912482 | orchestrator | 2025-09-18 10:58:35.912488 | orchestrator | TASK [cinder : Copying over config.json files for services] ******************** 2025-09-18 10:58:35.912493 | orchestrator | Thursday 18 September 2025 10:56:44 +0000 (0:00:01.372) 0:00:57.176 **** 2025-09-18 10:58:35.912499 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912512 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912538 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912543 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-09-18 10:58:35.912549 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912559 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912564 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912570 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912581 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912587 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912597 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912602 | orchestrator | 2025-09-18 10:58:35.912608 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-09-18 10:58:35.912613 | orchestrator | Thursday 18 September 2025 10:56:47 +0000 (0:00:02.776) 0:00:59.952 **** 2025-09-18 10:58:35.912618 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 10:58:35.912624 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.912629 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 10:58:35.912634 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.912639 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-09-18 10:58:35.912644 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.912650 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 10:58:35.912655 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 10:58:35.912660 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-09-18 10:58:35.912665 | orchestrator | 2025-09-18 10:58:35.912670 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-09-18 10:58:35.912675 | orchestrator | Thursday 18 September 2025 10:56:49 +0000 (0:00:01.862) 0:01:01.815 **** 2025-09-18 10:58:35.912681 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912792 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912799 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912809 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.912814 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912823 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 
2025-09-18 10:58:35.912831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912849 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 
10:58:35.912854 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912860 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912865 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.912871 | orchestrator | 2025-09-18 10:58:35.912876 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-09-18 10:58:35.912881 | orchestrator | Thursday 18 September 2025 10:56:56 +0000 (0:00:07.749) 0:01:09.564 **** 2025-09-18 10:58:35.912889 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.912894 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.912899 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.912904 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:58:35.912909 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:58:35.912915 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:58:35.912924 | orchestrator | 2025-09-18 10:58:35.912929 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-09-18 10:58:35.912959 | orchestrator | Thursday 18 September 2025 10:56:58 +0000 (0:00:01.912) 0:01:11.477 **** 2025-09-18 10:58:35.912968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 
 2025-09-18 10:58:35.912974 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912980 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.912985 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.912994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-09-18 10:58:35.913006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913012 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.913017 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.913022 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.913028 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913033 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913039 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.913044 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913072 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.913083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-09-18 10:58:35.913094 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.913099 | orchestrator | 2025-09-18 10:58:35.913105 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] **************** 2025-09-18 10:58:35.913110 | orchestrator | Thursday 18 September 2025 10:57:00 +0000 (0:00:01.174) 0:01:12.651 **** 2025-09-18 10:58:35.913115 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.913120 | orchestrator | skipping: [testbed-node-1] 2025-09-18 10:58:35.913126 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.913131 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.913136 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.913141 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.913146 | orchestrator | 2025-09-18 10:58:35.913151 | orchestrator | TASK [cinder : Check cinder containers] **************************************** 2025-09-18 10:58:35.913156 | 
orchestrator | Thursday 18 September 2025 10:57:00 +0000 (0:00:00.569) 0:01:13.221 **** 2025-09-18 10:58:35.913162 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.913167 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913182 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': 
{'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.913188 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-api:2024.2', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-09-18 10:58:35.913194 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913199 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-volume:2024.2', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913205 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913218 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913227 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-scheduler:2024.2', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913250 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913255 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/cinder-backup:2024.2', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-09-18 10:58:35.913261 | orchestrator | 2025-09-18 10:58:35.913266 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-09-18 10:58:35.913276 | orchestrator | Thursday 18 September 2025 10:57:03 +0000 (0:00:02.467) 0:01:15.688 **** 2025-09-18 10:58:35.913281 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.913286 | orchestrator | skipping: [testbed-node-1] 
2025-09-18 10:58:35.913291 | orchestrator | skipping: [testbed-node-2] 2025-09-18 10:58:35.913296 | orchestrator | skipping: [testbed-node-3] 2025-09-18 10:58:35.913301 | orchestrator | skipping: [testbed-node-4] 2025-09-18 10:58:35.913306 | orchestrator | skipping: [testbed-node-5] 2025-09-18 10:58:35.913311 | orchestrator | 2025-09-18 10:58:35.913317 | orchestrator | TASK [cinder : Creating Cinder database] *************************************** 2025-09-18 10:58:35.913322 | orchestrator | Thursday 18 September 2025 10:57:03 +0000 (0:00:00.616) 0:01:16.305 **** 2025-09-18 10:58:35.913327 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:58:35.913332 | orchestrator | 2025-09-18 10:58:35.913337 | orchestrator | TASK [cinder : Creating Cinder database user and setting permissions] ********** 2025-09-18 10:58:35.913342 | orchestrator | Thursday 18 September 2025 10:57:06 +0000 (0:00:02.728) 0:01:19.033 **** 2025-09-18 10:58:35.913347 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:58:35.913353 | orchestrator | 2025-09-18 10:58:35.913358 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-09-18 10:58:35.913363 | orchestrator | Thursday 18 September 2025 10:57:08 +0000 (0:00:02.250) 0:01:21.283 **** 2025-09-18 10:58:35.913368 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:58:35.913373 | orchestrator | 2025-09-18 10:58:35.913378 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913383 | orchestrator | Thursday 18 September 2025 10:57:26 +0000 (0:00:18.155) 0:01:39.439 **** 2025-09-18 10:58:35.913388 | orchestrator | 2025-09-18 10:58:35.913396 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913401 | orchestrator | Thursday 18 September 2025 10:57:26 +0000 (0:00:00.067) 0:01:39.506 **** 2025-09-18 10:58:35.913406 | orchestrator | 2025-09-18 10:58:35.913412 | 
orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913417 | orchestrator | Thursday 18 September 2025 10:57:26 +0000 (0:00:00.065) 0:01:39.572 **** 2025-09-18 10:58:35.913422 | orchestrator | 2025-09-18 10:58:35.913427 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913436 | orchestrator | Thursday 18 September 2025 10:57:27 +0000 (0:00:00.067) 0:01:39.639 **** 2025-09-18 10:58:35.913442 | orchestrator | 2025-09-18 10:58:35.913448 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913453 | orchestrator | Thursday 18 September 2025 10:57:27 +0000 (0:00:00.065) 0:01:39.704 **** 2025-09-18 10:58:35.913459 | orchestrator | 2025-09-18 10:58:35.913465 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-09-18 10:58:35.913471 | orchestrator | Thursday 18 September 2025 10:57:27 +0000 (0:00:00.066) 0:01:39.771 **** 2025-09-18 10:58:35.913477 | orchestrator | 2025-09-18 10:58:35.913482 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-09-18 10:58:35.913488 | orchestrator | Thursday 18 September 2025 10:57:27 +0000 (0:00:00.068) 0:01:39.840 **** 2025-09-18 10:58:35.913494 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:58:35.913499 | orchestrator | changed: [testbed-node-2] 2025-09-18 10:58:35.913505 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:58:35.913511 | orchestrator | 2025-09-18 10:58:35.913517 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-09-18 10:58:35.913522 | orchestrator | Thursday 18 September 2025 10:57:48 +0000 (0:00:21.661) 0:02:01.502 **** 2025-09-18 10:58:35.913528 | orchestrator | changed: [testbed-node-0] 2025-09-18 10:58:35.913534 | orchestrator | changed: [testbed-node-2] 
2025-09-18 10:58:35.913540 | orchestrator | changed: [testbed-node-1] 2025-09-18 10:58:35.913545 | orchestrator | 2025-09-18 10:58:35.913551 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-09-18 10:58:35.913557 | orchestrator | Thursday 18 September 2025 10:57:54 +0000 (0:00:05.585) 0:02:07.088 **** 2025-09-18 10:58:35.913566 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:58:35.913572 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:58:35.913578 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:58:35.913584 | orchestrator | 2025-09-18 10:58:35.913589 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-09-18 10:58:35.913595 | orchestrator | Thursday 18 September 2025 10:58:28 +0000 (0:00:34.271) 0:02:41.359 **** 2025-09-18 10:58:35.913601 | orchestrator | changed: [testbed-node-3] 2025-09-18 10:58:35.913607 | orchestrator | changed: [testbed-node-4] 2025-09-18 10:58:35.913613 | orchestrator | changed: [testbed-node-5] 2025-09-18 10:58:35.913618 | orchestrator | 2025-09-18 10:58:35.913624 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-09-18 10:58:35.913630 | orchestrator | Thursday 18 September 2025 10:58:34 +0000 (0:00:05.768) 0:02:47.128 **** 2025-09-18 10:58:35.913636 | orchestrator | skipping: [testbed-node-0] 2025-09-18 10:58:35.913642 | orchestrator | 2025-09-18 10:58:35.913647 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 10:58:35.913653 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-09-18 10:58:35.913660 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 2025-09-18 10:58:35.913666 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0 
2025-09-18 10:58:35.913671 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 10:58:35.913677 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 10:58:35.913683 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-09-18 10:58:35.913689 | orchestrator | 2025-09-18 10:58:35.913695 | orchestrator | 2025-09-18 10:58:35.913700 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 10:58:35.913706 | orchestrator | Thursday 18 September 2025 10:58:35 +0000 (0:00:00.633) 0:02:47.761 **** 2025-09-18 10:58:35.913712 | orchestrator | =============================================================================== 2025-09-18 10:58:35.913718 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 34.27s 2025-09-18 10:58:35.913723 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 21.66s 2025-09-18 10:58:35.913729 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 18.16s 2025-09-18 10:58:35.913735 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.03s 2025-09-18 10:58:35.913740 | orchestrator | cinder : Copying over cinder.conf --------------------------------------- 7.75s 2025-09-18 10:58:35.913746 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.70s 2025-09-18 10:58:35.913752 | orchestrator | cinder : Restart cinder-backup container -------------------------------- 5.77s 2025-09-18 10:58:35.913758 | orchestrator | cinder : Restart cinder-scheduler container ----------------------------- 5.59s 2025-09-18 10:58:35.913766 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 4.12s 2025-09-18 10:58:35.913772 
| orchestrator | service-ks-register : cinder | Creating users --------------------------- 4.09s 2025-09-18 10:58:35.913778 | orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.67s 2025-09-18 10:58:35.913783 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.54s 2025-09-18 10:58:35.913789 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.32s 2025-09-18 10:58:35.913802 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.24s 2025-09-18 10:58:35.913807 | orchestrator | cinder : Ensuring config directories exist ------------------------------ 2.96s 2025-09-18 10:58:35.913812 | orchestrator | service-cert-copy : cinder | Copying over extra CA certificates --------- 2.90s 2025-09-18 10:58:35.913817 | orchestrator | cinder : Copying over config.json files for services -------------------- 2.78s 2025-09-18 10:58:35.913823 | orchestrator | cinder : Creating Cinder database --------------------------------------- 2.73s 2025-09-18 10:58:35.913828 | orchestrator | cinder : Check cinder containers ---------------------------------------- 2.47s 2025-09-18 10:58:35.913833 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 2.29s 2025-09-18 10:58:35.913883 | orchestrator | 2025-09-18 10:58:35 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:35.915437 | orchestrator | 2025-09-18 10:58:35 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:35.915453 | orchestrator | 2025-09-18 10:58:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:38.973283 | orchestrator | 2025-09-18 10:58:38 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:38.974492 | orchestrator | 2025-09-18 10:58:38 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 
2025-09-18 10:58:38.975667 | orchestrator | 2025-09-18 10:58:38 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:38.977178 | orchestrator | 2025-09-18 10:58:38 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:38.977201 | orchestrator | 2025-09-18 10:58:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:42.028718 | orchestrator | 2025-09-18 10:58:42 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:42.029489 | orchestrator | 2025-09-18 10:58:42 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:42.032407 | orchestrator | 2025-09-18 10:58:42 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:42.034671 | orchestrator | 2025-09-18 10:58:42 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:42.035107 | orchestrator | 2025-09-18 10:58:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:45.072731 | orchestrator | 2025-09-18 10:58:45 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:45.073221 | orchestrator | 2025-09-18 10:58:45 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:45.074742 | orchestrator | 2025-09-18 10:58:45 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:45.075906 | orchestrator | 2025-09-18 10:58:45 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:45.075957 | orchestrator | 2025-09-18 10:58:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:48.115501 | orchestrator | 2025-09-18 10:58:48 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:48.116678 | orchestrator | 2025-09-18 10:58:48 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:48.118188 | 
orchestrator | 2025-09-18 10:58:48 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:48.119846 | orchestrator | 2025-09-18 10:58:48 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:48.119868 | orchestrator | 2025-09-18 10:58:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:51.172189 | orchestrator | 2025-09-18 10:58:51 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:51.173593 | orchestrator | 2025-09-18 10:58:51 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:51.176746 | orchestrator | 2025-09-18 10:58:51 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:51.178209 | orchestrator | 2025-09-18 10:58:51 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:51.178398 | orchestrator | 2025-09-18 10:58:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:54.234172 | orchestrator | 2025-09-18 10:58:54 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:54.236403 | orchestrator | 2025-09-18 10:58:54 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:54.239038 | orchestrator | 2025-09-18 10:58:54 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:54.241591 | orchestrator | 2025-09-18 10:58:54 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:54.241673 | orchestrator | 2025-09-18 10:58:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:58:57.291571 | orchestrator | 2025-09-18 10:58:57 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:58:57.294101 | orchestrator | 2025-09-18 10:58:57 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:58:57.296141 | orchestrator | 2025-09-18 
10:58:57 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:58:57.300184 | orchestrator | 2025-09-18 10:58:57 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:58:57.300315 | orchestrator | 2025-09-18 10:58:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:00.349972 | orchestrator | 2025-09-18 10:59:00 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:00.350747 | orchestrator | 2025-09-18 10:59:00 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:00.352503 | orchestrator | 2025-09-18 10:59:00 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:00.354709 | orchestrator | 2025-09-18 10:59:00 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:00.354735 | orchestrator | 2025-09-18 10:59:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:03.396506 | orchestrator | 2025-09-18 10:59:03 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:03.396597 | orchestrator | 2025-09-18 10:59:03 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:03.397330 | orchestrator | 2025-09-18 10:59:03 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:03.398480 | orchestrator | 2025-09-18 10:59:03 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:03.398510 | orchestrator | 2025-09-18 10:59:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:06.438828 | orchestrator | 2025-09-18 10:59:06 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:06.441946 | orchestrator | 2025-09-18 10:59:06 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:06.444579 | orchestrator | 2025-09-18 10:59:06 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:06.446857 | orchestrator | 2025-09-18 10:59:06 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:06.447416 | orchestrator | 2025-09-18 10:59:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:09.491126 | orchestrator | 2025-09-18 10:59:09 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:09.492194 | orchestrator | 2025-09-18 10:59:09 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:09.492763 | orchestrator | 2025-09-18 10:59:09 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:09.493436 | orchestrator | 2025-09-18 10:59:09 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:09.493960 | orchestrator | 2025-09-18 10:59:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:12.537881 | orchestrator | 2025-09-18 10:59:12 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:12.538319 | orchestrator | 2025-09-18 10:59:12 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:12.538452 | orchestrator | 2025-09-18 10:59:12 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:12.539127 | orchestrator | 2025-09-18 10:59:12 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:12.539149 | orchestrator | 2025-09-18 10:59:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:15.595546 | orchestrator | 2025-09-18 10:59:15 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:15.596275 | orchestrator | 2025-09-18 10:59:15 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:15.599590 | orchestrator | 2025-09-18 10:59:15 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:15.601412 | orchestrator | 2025-09-18 10:59:15 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:15.601435 | orchestrator | 2025-09-18 10:59:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:18.651001 | orchestrator | 2025-09-18 10:59:18 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:18.652538 | orchestrator | 2025-09-18 10:59:18 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:18.653947 | orchestrator | 2025-09-18 10:59:18 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:18.655191 | orchestrator | 2025-09-18 10:59:18 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:18.655453 | orchestrator | 2025-09-18 10:59:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:21.694776 | orchestrator | 2025-09-18 10:59:21 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:21.695585 | orchestrator | 2025-09-18 10:59:21 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:21.697288 | orchestrator | 2025-09-18 10:59:21 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:21.698659 | orchestrator | 2025-09-18 10:59:21 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:21.698900 | orchestrator | 2025-09-18 10:59:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:24.749962 | orchestrator | 2025-09-18 10:59:24 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:24.750577 | orchestrator | 2025-09-18 10:59:24 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:24.752033 | orchestrator | 2025-09-18 10:59:24 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:24.753440 | orchestrator | 2025-09-18 10:59:24 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state STARTED 2025-09-18 10:59:24.753463 | orchestrator | 2025-09-18 10:59:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:27.795675 | orchestrator | 2025-09-18 10:59:27 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:27.796991 | orchestrator | 2025-09-18 10:59:27 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:27.799334 | orchestrator | 2025-09-18 10:59:27 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:27.800486 | orchestrator | 2025-09-18 10:59:27 | INFO  | Task 5c373241-2587-4fad-8641-365605684f2e is in state SUCCESS 2025-09-18 10:59:27.800662 | orchestrator | 2025-09-18 10:59:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:30.867007 | orchestrator | 2025-09-18 10:59:30 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:30.869398 | orchestrator | 2025-09-18 10:59:30 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:30.871857 | orchestrator | 2025-09-18 10:59:30 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:30.873203 | orchestrator | 2025-09-18 10:59:30 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:30.873225 | orchestrator | 2025-09-18 10:59:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:33.918406 | orchestrator | 2025-09-18 10:59:33 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:33.919963 | orchestrator | 2025-09-18 10:59:33 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:33.921389 | orchestrator | 2025-09-18 10:59:33 | INFO  | Task 
659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:33.922910 | orchestrator | 2025-09-18 10:59:33 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:33.923118 | orchestrator | 2025-09-18 10:59:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:36.968250 | orchestrator | 2025-09-18 10:59:36 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:36.969588 | orchestrator | 2025-09-18 10:59:36 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:36.971293 | orchestrator | 2025-09-18 10:59:36 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:36.972626 | orchestrator | 2025-09-18 10:59:36 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:36.972673 | orchestrator | 2025-09-18 10:59:36 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:40.025449 | orchestrator | 2025-09-18 10:59:40 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:40.026959 | orchestrator | 2025-09-18 10:59:40 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:40.029112 | orchestrator | 2025-09-18 10:59:40 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:40.029376 | orchestrator | 2025-09-18 10:59:40 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:40.029576 | orchestrator | 2025-09-18 10:59:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:43.075497 | orchestrator | 2025-09-18 10:59:43 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:43.076609 | orchestrator | 2025-09-18 10:59:43 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:43.078749 | orchestrator | 2025-09-18 10:59:43 | INFO  | Task 
659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:43.079892 | orchestrator | 2025-09-18 10:59:43 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:43.079916 | orchestrator | 2025-09-18 10:59:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:46.124566 | orchestrator | 2025-09-18 10:59:46 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:46.125547 | orchestrator | 2025-09-18 10:59:46 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:46.127551 | orchestrator | 2025-09-18 10:59:46 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:46.128921 | orchestrator | 2025-09-18 10:59:46 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:46.128954 | orchestrator | 2025-09-18 10:59:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:49.180723 | orchestrator | 2025-09-18 10:59:49 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:49.183183 | orchestrator | 2025-09-18 10:59:49 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:49.185546 | orchestrator | 2025-09-18 10:59:49 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:49.187031 | orchestrator | 2025-09-18 10:59:49 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:49.187980 | orchestrator | 2025-09-18 10:59:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:52.234516 | orchestrator | 2025-09-18 10:59:52 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:52.235504 | orchestrator | 2025-09-18 10:59:52 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:52.236970 | orchestrator | 2025-09-18 10:59:52 | INFO  | Task 
659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:52.238186 | orchestrator | 2025-09-18 10:59:52 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:52.238217 | orchestrator | 2025-09-18 10:59:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:55.284812 | orchestrator | 2025-09-18 10:59:55 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:55.285621 | orchestrator | 2025-09-18 10:59:55 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:55.286362 | orchestrator | 2025-09-18 10:59:55 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state STARTED 2025-09-18 10:59:55.287713 | orchestrator | 2025-09-18 10:59:55 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:55.287761 | orchestrator | 2025-09-18 10:59:55 | INFO  | Wait 1 second(s) until the next check 2025-09-18 10:59:58.339839 | orchestrator | 2025-09-18 10:59:58 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 10:59:58.340989 | orchestrator | 2025-09-18 10:59:58 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 10:59:58.342265 | orchestrator | 2025-09-18 10:59:58 | INFO  | Task 659cb30e-a2f4-4057-847c-2dbc9859799f is in state SUCCESS 2025-09-18 10:59:58.343672 | orchestrator | 2025-09-18 10:59:58 | INFO  | Task 62e2b260-5b43-4372-8f89-cd0ed910c516 is in state STARTED 2025-09-18 10:59:58.343785 | orchestrator | 2025-09-18 10:59:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:00:01.385893 | orchestrator | 2025-09-18 11:00:01 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:00:01.386784 | orchestrator | 2025-09-18 11:00:01 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:00:01.392435 | orchestrator | 2025-09-18 11:00:01 | INFO  | Task 
62e2b260-5b43-4372-8f89-cd0ed910c516 is in state SUCCESS 2025-09-18 11:00:01.395614 | orchestrator | 2025-09-18 11:00:01.395649 | orchestrator | 2025-09-18 11:00:01.395662 | orchestrator | PLAY [Download ironic ipa images] ********************************************** 2025-09-18 11:00:01.395674 | orchestrator | 2025-09-18 11:00:01.395686 | orchestrator | TASK [Ensure the destination directory exists] ********************************* 2025-09-18 11:00:01.395698 | orchestrator | Thursday 18 September 2025 10:52:55 +0000 (0:00:00.169) 0:00:00.169 **** 2025-09-18 11:00:01.395778 | orchestrator | changed: [localhost] 2025-09-18 11:00:01.395794 | orchestrator | 2025-09-18 11:00:01.395833 | orchestrator | TASK [Download ironic-agent initramfs] ***************************************** 2025-09-18 11:00:01.395856 | orchestrator | Thursday 18 September 2025 10:52:55 +0000 (0:00:00.930) 0:00:01.100 **** 2025-09-18 11:00:01.395895 | orchestrator | FAILED - RETRYING: [localhost]: Download ironic-agent initramfs (3 retries left). 
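The "FAILED - RETRYING: ... (3 retries left)" output above is the standard result of Ansible's `retries`/`until` mechanism on a download task. A minimal sketch of such a task follows; the URL and destination path are placeholders, not the values used by this job:

```yaml
# Hedged sketch: a download task with retry semantics matching the
# "FAILED - RETRYING ... (3 retries left)" messages in the log above.
# url and dest are hypothetical placeholders.
- name: Download ironic-agent initramfs
  ansible.builtin.get_url:
    url: "https://example.com/ironic-agent.initramfs"  # placeholder
    dest: "/opt/images/ironic-agent.initramfs"         # placeholder
    mode: "0644"
  register: result
  until: result is succeeded
  retries: 3
  delay: 5
```

With `retries: 3`, the first failure prints "(3 retries left)" before the module is re-run, which matches the retry counter seen in the log.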
2025-09-18 11:00:01.395907 | orchestrator | 2025-09-18 11:00:01.395918 | orchestrator | STILL ALIVE [task 'Download ironic-agent initramfs' is running] **************** 2025-09-18 11:00:01.396123 | orchestrator | changed: [localhost] 2025-09-18 11:00:01.396134 | orchestrator | 2025-09-18 11:00:01.396145 | orchestrator | TASK [Download ironic-agent kernel] ******************************************** 2025-09-18 11:00:01.396199 | orchestrator | Thursday 18 September 2025 10:59:13 +0000 (0:06:17.265) 0:06:18.365 **** 2025-09-18 11:00:01.396211 | orchestrator | changed: [localhost] 2025-09-18 11:00:01.396224 | orchestrator | 2025-09-18 11:00:01.396318 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 11:00:01.396340 | orchestrator | 2025-09-18 
11:00:01.396358 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 11:00:01.396377 | orchestrator | Thursday 18 September 2025 10:59:25 +0000 (0:00:12.751) 0:06:31.117 **** 2025-09-18 11:00:01.396396 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:00:01.396413 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:00:01.396426 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:00:01.396439 | orchestrator | 2025-09-18 11:00:01.396451 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 11:00:01.396465 | orchestrator | Thursday 18 September 2025 10:59:26 +0000 (0:00:00.712) 0:06:31.829 **** 2025-09-18 11:00:01.396477 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True 2025-09-18 11:00:01.396491 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False) 2025-09-18 11:00:01.396505 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False) 2025-09-18 11:00:01.396547 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False) 2025-09-18 11:00:01.396560 | orchestrator | 2025-09-18 11:00:01.396573 | orchestrator | PLAY [Apply role ironic] ******************************************************* 2025-09-18 11:00:01.396584 | orchestrator | skipping: no hosts matched 2025-09-18 11:00:01.396597 | orchestrator | 2025-09-18 11:00:01.396608 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 11:00:01.396620 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.396634 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.396647 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.396659 | orchestrator | testbed-node-2 : ok=2  changed=0 
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.396670 | orchestrator | 2025-09-18 11:00:01.396681 | orchestrator | 2025-09-18 11:00:01.396692 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 11:00:01.396703 | orchestrator | Thursday 18 September 2025 10:59:27 +0000 (0:00:00.537) 0:06:32.366 **** 2025-09-18 11:00:01.396732 | orchestrator | =============================================================================== 2025-09-18 11:00:01.396743 | orchestrator | Download ironic-agent initramfs --------------------------------------- 377.27s 2025-09-18 11:00:01.396754 | orchestrator | Download ironic-agent kernel ------------------------------------------- 12.75s 2025-09-18 11:00:01.396779 | orchestrator | Ensure the destination directory exists --------------------------------- 0.93s 2025-09-18 11:00:01.396790 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.71s 2025-09-18 11:00:01.396801 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.54s 2025-09-18 11:00:01.396812 | orchestrator | 2025-09-18 11:00:01.396823 | orchestrator | 2025-09-18 11:00:01.396834 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 11:00:01.396845 | orchestrator | 2025-09-18 11:00:01.396883 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 11:00:01.396895 | orchestrator | Thursday 18 September 2025 10:58:39 +0000 (0:00:00.184) 0:00:00.184 **** 2025-09-18 11:00:01.396906 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:00:01.396917 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:00:01.396928 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:00:01.396988 | orchestrator | 2025-09-18 11:00:01.396999 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 
2025-09-18 11:00:01.397025 | orchestrator | Thursday 18 September 2025 10:58:39 +0000 (0:00:00.308) 0:00:00.492 **** 2025-09-18 11:00:01.397037 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True) 2025-09-18 11:00:01.397048 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True) 2025-09-18 11:00:01.397059 | orchestrator | ok: [testbed-node-2] => (item=enable_nova_True) 2025-09-18 11:00:01.397070 | orchestrator | 2025-09-18 11:00:01.397081 | orchestrator | PLAY [Wait for the Nova service] *********************************************** 2025-09-18 11:00:01.397092 | orchestrator | 2025-09-18 11:00:01.397103 | orchestrator | TASK [Waiting for Nova public port to be UP] *********************************** 2025-09-18 11:00:01.397114 | orchestrator | Thursday 18 September 2025 10:58:40 +0000 (0:00:00.707) 0:00:01.200 **** 2025-09-18 11:00:01.397125 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:00:01.397136 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:00:01.397147 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:00:01.397158 | orchestrator | 2025-09-18 11:00:01.397169 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 11:00:01.397180 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.397205 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.397217 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:00:01.397228 | orchestrator | 2025-09-18 11:00:01.397238 | orchestrator | 2025-09-18 11:00:01.397249 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 11:00:01.397260 | orchestrator | Thursday 18 September 2025 10:59:57 +0000 (0:01:16.763) 0:01:17.964 **** 2025-09-18 11:00:01.397271 | orchestrator | 
=============================================================================== 2025-09-18 11:00:01.397282 | orchestrator | Waiting for Nova public port to be UP ---------------------------------- 76.76s 2025-09-18 11:00:01.397293 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s 2025-09-18 11:00:01.397304 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.31s 2025-09-18 11:00:01.397315 | orchestrator | 2025-09-18 11:00:01.397326 | orchestrator | 2025-09-18 11:00:01.397337 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 11:00:01.397348 | orchestrator | 2025-09-18 11:00:01.397359 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 11:00:01.397369 | orchestrator | Thursday 18 September 2025 10:57:50 +0000 (0:00:00.269) 0:00:00.269 **** 2025-09-18 11:00:01.397380 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:00:01.397391 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:00:01.397402 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:00:01.397413 | orchestrator | 2025-09-18 11:00:01.397424 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 11:00:01.397435 | orchestrator | Thursday 18 September 2025 10:57:50 +0000 (0:00:00.418) 0:00:00.687 **** 2025-09-18 11:00:01.397446 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True) 2025-09-18 11:00:01.397457 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True) 2025-09-18 11:00:01.397468 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True) 2025-09-18 11:00:01.397479 | orchestrator | 2025-09-18 11:00:01.397490 | orchestrator | PLAY [Apply role grafana] ****************************************************** 2025-09-18 11:00:01.397500 | orchestrator | 2025-09-18 11:00:01.397511 | orchestrator | TASK [grafana : 
include_tasks] ************************************************* 2025-09-18 11:00:01.397522 | orchestrator | Thursday 18 September 2025 10:57:51 +0000 (0:00:00.513) 0:00:01.201 **** 2025-09-18 11:00:01.397533 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:00:01.397544 | orchestrator | 2025-09-18 11:00:01.397555 | orchestrator | TASK [grafana : Ensuring config directories exist] ***************************** 2025-09-18 11:00:01.397566 | orchestrator | Thursday 18 September 2025 10:57:51 +0000 (0:00:00.506) 0:00:01.708 **** 2025-09-18 11:00:01.397581 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397604 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 
'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397632 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397644 | orchestrator | 2025-09-18 11:00:01.397656 | orchestrator | TASK [grafana : Check if extra configuration file exists] ********************** 2025-09-18 11:00:01.397667 | orchestrator | Thursday 18 September 2025 10:57:52 +0000 (0:00:00.846) 0:00:02.554 **** 2025-09-18 11:00:01.397678 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access 2025-09-18 11:00:01.397689 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory 2025-09-18 11:00:01.397700 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 11:00:01.397729 | orchestrator | 2025-09-18 11:00:01.397740 | orchestrator | TASK [grafana : include_tasks] ************************************************* 2025-09-18 11:00:01.397751 | orchestrator | Thursday 18 September 2025 10:57:53 +0000 (0:00:00.891) 0:00:03.446 **** 2025-09-18 11:00:01.397762 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:00:01.397773 | orchestrator | 2025-09-18 11:00:01.397784 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ******** 2025-09-18 11:00:01.397795 | orchestrator | Thursday 18 September 2025 10:57:54 
+0000 (0:00:00.706) 0:00:04.153 **** 2025-09-18 11:00:01.397807 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397819 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397831 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 
'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.397849 | orchestrator | 2025-09-18 11:00:01.397860 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS certificate] *** 2025-09-18 11:00:01.397871 | orchestrator | Thursday 18 September 2025 10:57:55 +0000 (0:00:01.618) 0:00:05.771 **** 2025-09-18 11:00:01.397897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.397909 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:00:01.397921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.397933 | orchestrator | skipping: 
[testbed-node-1] 2025-09-18 11:00:01.397945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.397956 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:00:01.397967 | orchestrator | 2025-09-18 11:00:01.397978 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] ***** 2025-09-18 11:00:01.397989 | orchestrator | Thursday 18 September 2025 10:57:56 +0000 (0:00:00.452) 0:00:06.223 **** 2025-09-18 11:00:01.398000 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.398012 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:00:01.398086 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 
'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.398118 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:00:01.398145 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-09-18 11:00:01.398167 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:00:01.398189 | orchestrator | 2025-09-18 11:00:01.398209 | orchestrator | TASK [grafana : Copying over config.json files] ******************************** 2025-09-18 11:00:01.398233 | orchestrator | Thursday 18 September 2025 10:57:57 +0000 (0:00:00.838) 0:00:07.061 **** 2025-09-18 11:00:01.398245 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398257 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398281 | orchestrator | 2025-09-18 11:00:01.398293 | orchestrator | TASK [grafana : Copying over grafana.ini] ************************************** 2025-09-18 
11:00:01.398304 | orchestrator | Thursday 18 September 2025 10:57:58 +0000 (0:00:01.466) 0:00:08.528 **** 2025-09-18 11:00:01.398316 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398360 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': 
False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-09-18 11:00:01.398372 | orchestrator | 2025-09-18 11:00:01.398383 | orchestrator | TASK [grafana : Copying over extra configuration file] ************************* 2025-09-18 11:00:01.398394 | orchestrator | Thursday 18 September 2025 10:58:00 +0000 (0:00:01.599) 0:00:10.128 **** 2025-09-18 11:00:01.398405 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:00:01.398417 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:00:01.398428 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:00:01.398439 | orchestrator | 2025-09-18 11:00:01.398449 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] ************* 2025-09-18 11:00:01.398460 | orchestrator | Thursday 18 September 2025 10:58:00 +0000 (0:00:00.548) 0:00:10.676 **** 2025-09-18 11:00:01.398471 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 11:00:01.398484 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 11:00:01.398504 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2) 2025-09-18 11:00:01.398521 | orchestrator | 2025-09-18 11:00:01.398540 | orchestrator | TASK [grafana : Configuring dashboards provisioning] *************************** 2025-09-18 11:00:01.398559 | orchestrator | Thursday 18 September 2025 10:58:02 +0000 (0:00:01.433) 0:00:12.110 **** 2025-09-18 11:00:01.398579 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-18 11:00:01.398591 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 
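The `prometheus.yaml.j2` template rendered in the "Configuring Prometheus as data source for Grafana" task above is a Grafana datasource provisioning file. A minimal sketch of that file format follows; the datasource URL is a placeholder, not the value used in this deployment:

```yaml
# Hedged sketch of a Grafana datasource provisioning file, the kind of
# output produced by prometheus.yaml.j2 above. The url is a placeholder.
apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: "http://localhost:9090"  # placeholder
    isDefault: true
```

Grafana reads files of this shape from its provisioning directory at startup, which is why the role only needs to template the file and restart the container.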
2025-09-18 11:00:01.398602 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-09-18 11:00:01.398613 | orchestrator | 2025-09-18 11:00:01.398624 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-09-18 11:00:01.398635 | orchestrator | Thursday 18 September 2025 10:58:03 +0000 (0:00:01.440) 0:00:13.550 **** 2025-09-18 11:00:01.398646 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 11:00:01.398657 | orchestrator | 2025-09-18 11:00:01.398668 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-09-18 11:00:01.398679 | orchestrator | Thursday 18 September 2025 10:58:04 +0000 (0:00:00.756) 0:00:14.307 **** 2025-09-18 11:00:01.398698 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-09-18 11:00:01.398732 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-09-18 11:00:01.398743 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:00:01.398755 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:00:01.398766 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:00:01.398777 | orchestrator | 2025-09-18 11:00:01.398788 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-09-18 11:00:01.398800 | orchestrator | Thursday 18 September 2025 10:58:05 +0000 (0:00:00.750) 0:00:15.057 **** 2025-09-18 11:00:01.398811 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:00:01.398822 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:00:01.398833 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:00:01.398844 | orchestrator | 2025-09-18 11:00:01.398855 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-09-18 11:00:01.398866 | orchestrator | Thursday 18 September 2025 10:58:05 +0000 (0:00:00.643) 
0:00:15.700 **** 2025-09-18 11:00:01.398878 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327726, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4496624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327726, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4496624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398920 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1327726, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4496624, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 
'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398933 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328214, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6053157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398946 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328214, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6053157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1328214, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6053157, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': 
False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327742, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.451614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.398989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327742, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.451614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399032 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1327742, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.451614, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 
'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328216, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6064103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328216, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6064103, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399075 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1328216, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6064103, 'gr_name': 'root', 'pw_name': 'root', 
'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327811, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4817517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399099 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327811, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4817517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1327811, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 
1758190046.4817517, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399135 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1328169, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6027536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1328169, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6027536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39556, 'inode': 1328169, 'dev': 103, 'nlink': 1, 'atime': 
1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6027536, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399178 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327725, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.448119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327725, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.448119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1327725, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 
'ctime': 1758190046.448119, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399225 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327733, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4497511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399237 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327733, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4497511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399256 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1327733, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 
'mtime': 1758153730.0, 'ctime': 1758190046.4497511, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399268 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327748, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4522173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399280 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327748, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4522173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1327748, 'dev': 103, 
'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4522173, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399308 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327817, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399327 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327817, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399345 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1327817, 
'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1328211, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.604792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399369 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1328211, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.604792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399381 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 
1328211, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.604792, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399397 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327736, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4510505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399416 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1327736, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4510505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399428 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 
'size': 80386, 'inode': 1327736, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4510505, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399447 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1328165, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.59308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399459 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1328165, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.59308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399471 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': 
False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1328165, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.59308, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399482 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327813, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399505 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327813, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399517 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 
'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1327813, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4831438, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399535 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327809, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4807515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399548 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327809, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4807515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399559 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 
'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62676, 'inode': 1327809, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4807515, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327756, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4793706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327756, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4793706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.399608 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': 
'/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327819, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.5925486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399627 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1327756, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4793706, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399638 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327819, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.5925486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399650 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327753, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4528236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399662 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1327819, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.5925486, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399673 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327753, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4528236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399699 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1328209, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6037538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1328209, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6037538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399793 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1327753, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.4528236, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399805 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328439, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6698058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399817 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328439, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6698058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399829 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1328209, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6037538, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399853 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328271, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328271, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399885 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1328439, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6698058, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399897 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328232, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6088712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399909 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328232, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6088712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399921 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1328271, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399937 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328293, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6239285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399963 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328293, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6239285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399975 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1328232, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6088712, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399987 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328223, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6069922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.399999 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328223, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6069922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400011 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 15725, 'inode': 1328293, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6239285, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400028 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328415, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.662167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400057 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328415, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.662167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400069 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/alertmanager-overview.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/alertmanager-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9645, 'inode': 1328223, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6069922, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400081 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328294, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6587546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400092 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328294, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6587546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400104 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1328415, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.662167, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328419, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6625257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400150 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328419, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6625257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400162 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1328294, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6587546, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328433, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6669717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400186 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328433, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6669717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400197 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus-remote-write.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus-remote-write.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 22317, 'inode': 1328419, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6625257, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328413, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6607547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400253 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328413, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6607547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400264 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1328433, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6669717, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400275 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328286, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.622467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328286, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.622467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400296 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/nodes.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/nodes.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21109, 'inode': 1328413, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6607547, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328247, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.617178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328247, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.617178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400346 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1328286, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.622467, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400357 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328285, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328285, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400378 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1328247, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.617178, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400389 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328235, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6115215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400410 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328235, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6115215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400430 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1328285, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6217647, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400441 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328287, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6236396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400451 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328287, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6236396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400462 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328429, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6647549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1328235, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6115215, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400497 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328429, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6647549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400513 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328423, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6642847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400524 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node-cluster-rsrc-use.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node-cluster-rsrc-use.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16098, 'inode': 1328287, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6236396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400534
| orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328423, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6642847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1328429, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6647549, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400561 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328225, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6075401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400571 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328225, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6075401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400659 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328230, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.607899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400672 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1328423, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6642847, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 
'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400682 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328230, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.607899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400693 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328407, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6597548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328407, 'dev': 103, 'nlink': 1, 
'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6597548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400741 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1328225, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6075401, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400762 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1328421, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6627548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 
0, 'size': 21898, 'inode': 1328421, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6627548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400784 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1328230, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.607899, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400794 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1328407, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6597548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-09-18 11:00:01.400811 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 
'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 21898, 'inode': 1328421, 'dev': 103, 'nlink': 1, 'atime': 1758153730.0, 'mtime': 1758153730.0, 'ctime': 1758190046.6627548, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-09-18 11:00:01.400821 | orchestrator |
2025-09-18 11:00:01.400831 | orchestrator | TASK [grafana : Check grafana containers] **************************************
2025-09-18 11:00:01.400842 | orchestrator | Thursday 18 September 2025 10:58:46 +0000 (0:00:40.374) 0:00:56.074 ****
2025-09-18 11:00:01.400852 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-18 11:00:01.400873 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-18 11:00:01.400884 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/grafana:2024.2', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-09-18 11:00:01.400894 | orchestrator |
2025-09-18 11:00:01.400904 | orchestrator | TASK [grafana : Creating grafana database] *************************************
2025-09-18 11:00:01.400914 | orchestrator | Thursday 18 September 2025 10:58:47 +0000 (0:00:01.042) 0:00:57.117 ****
2025-09-18 11:00:01.400924 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:00:01.400934 | orchestrator |
2025-09-18 11:00:01.400943 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ********
2025-09-18 11:00:01.400953 | orchestrator | Thursday 18 September 2025 10:58:49 +0000 (0:00:02.414) 0:00:59.531 ****
2025-09-18 11:00:01.400963 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:00:01.400973 | orchestrator |
2025-09-18 11:00:01.400982 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-18 11:00:01.400992 | orchestrator | Thursday 18 September 2025 10:58:51 +0000 (0:00:02.258) 0:01:01.790 ****
2025-09-18 11:00:01.401002 | orchestrator |
2025-09-18 11:00:01.401018 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-18 11:00:01.401027 | orchestrator | Thursday 18 September 2025 10:58:51 +0000 (0:00:00.062) 0:01:01.852 ****
2025-09-18 11:00:01.401037 | orchestrator |
2025-09-18 11:00:01.401047 | orchestrator | TASK [grafana : Flush handlers] ************************************************
2025-09-18 11:00:01.401056 | orchestrator | Thursday 18 September 2025 10:58:51 +0000 (0:00:00.063) 0:01:01.916 ****
2025-09-18 11:00:01.401066 | orchestrator |
2025-09-18 11:00:01.401076 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ********************
2025-09-18 11:00:01.401086 | orchestrator | Thursday 18 September 2025 10:58:52 +0000 (0:00:00.265) 0:01:02.182 ****
2025-09-18 11:00:01.401095 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:00:01.401105 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:00:01.401115 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:00:01.401125 | orchestrator |
2025-09-18 11:00:01.401135 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] *********
2025-09-18 11:00:01.401144 | orchestrator | Thursday 18 September 2025 10:58:59 +0000 (0:00:06.924) 0:01:09.106 ****
2025-09-18 11:00:01.401154 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:00:01.401164 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:00:01.401174 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left).
2025-09-18 11:00:01.401184 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left).
2025-09-18 11:00:01.401194 | orchestrator | ok: [testbed-node-0]
2025-09-18 11:00:01.401204 | orchestrator |
2025-09-18 11:00:01.401214 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] ***************
2025-09-18 11:00:01.401223 | orchestrator | Thursday 18 September 2025 10:59:25 +0000 (0:00:26.668) 0:01:35.775 ****
2025-09-18 11:00:01.401233 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:00:01.401243 | orchestrator | changed: [testbed-node-2]
2025-09-18 11:00:01.401252 | orchestrator | changed: [testbed-node-1]
2025-09-18 11:00:01.401262 | orchestrator |
2025-09-18 11:00:01.401272 | orchestrator | TASK [grafana : Wait for grafana application ready] ****************************
2025-09-18 11:00:01.401281 | orchestrator | Thursday 18 September 2025 10:59:54 +0000 (0:00:28.763) 0:02:04.538 ****
2025-09-18 11:00:01.401291 | orchestrator | ok: [testbed-node-0]
2025-09-18 11:00:01.401301 | orchestrator |
2025-09-18 11:00:01.401310 | orchestrator | TASK [grafana : Remove old grafana docker volume] ******************************
2025-09-18 11:00:01.401320 | orchestrator | Thursday 18 September 2025 10:59:57 +0000 (0:00:02.534) 0:02:07.073 ****
2025-09-18 11:00:01.401329 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:00:01.401339 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:00:01.401349 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:00:01.401359 | orchestrator |
2025-09-18 11:00:01.401368 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************
2025-09-18 11:00:01.401378 | orchestrator | Thursday 18 September 2025 10:59:57 +0000 (0:00:00.536) 0:02:07.610 ****
2025-09-18 11:00:01.401389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth':
False}}})
2025-09-18 11:00:01.401408 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}})
2025-09-18 11:00:01.401418 | orchestrator |
2025-09-18 11:00:01.401433 | orchestrator | TASK [grafana : Disable Getting Started panel] *********************************
2025-09-18 11:00:01.401443 | orchestrator | Thursday 18 September 2025 11:00:00 +0000 (0:00:02.521) 0:02:10.132 ****
2025-09-18 11:00:01.401453 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:00:01.401469 | orchestrator |
2025-09-18 11:00:01.401478 | orchestrator | PLAY RECAP *********************************************************************
2025-09-18 11:00:01.401489 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-18 11:00:01.401500 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-18 11:00:01.401509 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-09-18 11:00:01.401519 | orchestrator |
2025-09-18 11:00:01.401529 | orchestrator |
2025-09-18 11:00:01.401539 | orchestrator | TASKS RECAP ********************************************************************
2025-09-18 11:00:01.401548 | orchestrator | Thursday 18 September 2025 11:00:00 +0000 (0:00:00.275) 0:02:10.407 ****
2025-09-18 11:00:01.401558 | orchestrator | ===============================================================================
2025-09-18 11:00:01.401568 | orchestrator | grafana : Copying over custom dashboards ------------------------------- 40.37s
2025-09-18 11:00:01.401578 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 28.76s
2025-09-18 11:00:01.401587 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 26.67s
2025-09-18 11:00:01.401597 | orchestrator | grafana : Restart first grafana container ------------------------------- 6.92s
2025-09-18 11:00:01.401606 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.53s
2025-09-18 11:00:01.401616 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.52s
2025-09-18 11:00:01.401626 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.41s
2025-09-18 11:00:01.401635 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.26s
2025-09-18 11:00:01.401645 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.62s
2025-09-18 11:00:01.401655 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.60s
2025-09-18 11:00:01.401665 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.47s
2025-09-18 11:00:01.401675 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.44s
2025-09-18 11:00:01.401684 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.43s
2025-09-18 11:00:01.401694 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.04s
2025-09-18 11:00:01.401719 | orchestrator | grafana : Check if extra configuration file exists ---------------------- 0.89s
2025-09-18 11:00:01.401729 | orchestrator | grafana : Ensuring config directories exist ----------------------------- 0.85s
2025-09-18 11:00:01.401739 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.84s
2025-09-18 11:00:01.401748 | orchestrator | grafana : Find custom grafana dashboards -------------------------------- 0.76s
2025-09-18 11:00:01.401758 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.75s
2025-09-18 11:00:01.401768 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.71s
2025-09-18 11:00:01.401778 | orchestrator | 2025-09-18 11:00:01 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:04.448881 | orchestrator | 2025-09-18 11:00:04 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:04.449134 | orchestrator | 2025-09-18 11:00:04 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:04.449326 | orchestrator | 2025-09-18 11:00:04 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:07.484655 | orchestrator | 2025-09-18 11:00:07 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:07.486403 | orchestrator | 2025-09-18 11:00:07 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:07.486575 | orchestrator | 2025-09-18 11:00:07 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:10.530612 | orchestrator | 2025-09-18 11:00:10 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:10.530897 | orchestrator | 2025-09-18 11:00:10 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:10.530921 | orchestrator | 2025-09-18 11:00:10 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:13.575528 | orchestrator | 2025-09-18 11:00:13 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:13.577165 | orchestrator | 2025-09-18 11:00:13 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:13.577192 | orchestrator | 2025-09-18 11:00:13 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:16.620228 | orchestrator | 2025-09-18 11:00:16
| INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:16.621231 | orchestrator | 2025-09-18 11:00:16 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:16.621385 | orchestrator | 2025-09-18 11:00:16 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:19.658628 | orchestrator | 2025-09-18 11:00:19 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:19.659196 | orchestrator | 2025-09-18 11:00:19 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:19.659229 | orchestrator | 2025-09-18 11:00:19 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:22.702436 | orchestrator | 2025-09-18 11:00:22 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:22.703671 | orchestrator | 2025-09-18 11:00:22 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:22.703699 | orchestrator | 2025-09-18 11:00:22 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:25.743804 | orchestrator | 2025-09-18 11:00:25 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:25.744909 | orchestrator | 2025-09-18 11:00:25 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:25.744942 | orchestrator | 2025-09-18 11:00:25 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:28.786100 | orchestrator | 2025-09-18 11:00:28 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:28.787930 | orchestrator | 2025-09-18 11:00:28 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:28.787957 | orchestrator | 2025-09-18 11:00:28 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:31.830572 | orchestrator | 2025-09-18 11:00:31 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:31.831801 | orchestrator | 2025-09-18 11:00:31 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:31.832005 | orchestrator | 2025-09-18 11:00:31 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:34.899760 | orchestrator | 2025-09-18 11:00:34 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:34.902120 | orchestrator | 2025-09-18 11:00:34 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:34.902355 | orchestrator | 2025-09-18 11:00:34 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:37.944327 | orchestrator | 2025-09-18 11:00:37 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:37.946146 | orchestrator | 2025-09-18 11:00:37 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:37.946997 | orchestrator | 2025-09-18 11:00:37 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:40.982220 | orchestrator | 2025-09-18 11:00:40 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:40.983026 | orchestrator | 2025-09-18 11:00:40 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:40.983042 | orchestrator | 2025-09-18 11:00:40 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:44.032287 | orchestrator | 2025-09-18 11:00:44 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:44.032773 | orchestrator | 2025-09-18 11:00:44 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:44.032804 | orchestrator | 2025-09-18 11:00:44 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:47.082466 | orchestrator | 2025-09-18 11:00:47 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:47.084547 | orchestrator | 2025-09-18 11:00:47 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:47.084576 | orchestrator | 2025-09-18 11:00:47 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:50.132713 | orchestrator | 2025-09-18 11:00:50 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:50.133532 | orchestrator | 2025-09-18 11:00:50 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:50.133562 | orchestrator | 2025-09-18 11:00:50 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:53.173046 | orchestrator | 2025-09-18 11:00:53 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:53.175703 | orchestrator | 2025-09-18 11:00:53 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:53.175736 | orchestrator | 2025-09-18 11:00:53 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:56.214746 | orchestrator | 2025-09-18 11:00:56 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:56.214959 | orchestrator | 2025-09-18 11:00:56 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:56.214982 | orchestrator | 2025-09-18 11:00:56 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:00:59.257787 | orchestrator | 2025-09-18 11:00:59 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:00:59.257868 | orchestrator | 2025-09-18 11:00:59 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:00:59.257880 | orchestrator | 2025-09-18 11:00:59 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:02.293290 | orchestrator | 2025-09-18 11:01:02 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:02.293542 | orchestrator | 2025-09-18 11:01:02 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:02.293627 | orchestrator | 2025-09-18 11:01:02 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:05.331318 | orchestrator | 2025-09-18 11:01:05 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:05.332426 | orchestrator | 2025-09-18 11:01:05 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:05.332456 | orchestrator | 2025-09-18 11:01:05 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:08.375661 | orchestrator | 2025-09-18 11:01:08 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:08.375758 | orchestrator | 2025-09-18 11:01:08 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:08.375773 | orchestrator | 2025-09-18 11:01:08 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:11.406807 | orchestrator | 2025-09-18 11:01:11 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:11.407159 | orchestrator | 2025-09-18 11:01:11 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:11.407189 | orchestrator | 2025-09-18 11:01:11 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:14.449930 | orchestrator | 2025-09-18 11:01:14 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:14.450069 | orchestrator | 2025-09-18 11:01:14 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:14.450086 | orchestrator | 2025-09-18 11:01:14 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:17.489706 | orchestrator | 2025-09-18 11:01:17 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:17.490408 | orchestrator | 2025-09-18 11:01:17 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:17.490442 | orchestrator | 2025-09-18 11:01:17 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:20.532146 | orchestrator | 2025-09-18 11:01:20 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:20.533274 | orchestrator | 2025-09-18 11:01:20 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:20.533307 | orchestrator | 2025-09-18 11:01:20 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:23.575810 | orchestrator | 2025-09-18 11:01:23 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:23.577186 | orchestrator | 2025-09-18 11:01:23 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:23.577293 | orchestrator | 2025-09-18 11:01:23 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:26.623219 | orchestrator | 2025-09-18 11:01:26 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:26.624685 | orchestrator | 2025-09-18 11:01:26 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:26.624716 | orchestrator | 2025-09-18 11:01:26 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:29.667182 | orchestrator | 2025-09-18 11:01:29 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:29.668224 | orchestrator | 2025-09-18 11:01:29 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:29.668253 | orchestrator | 2025-09-18 11:01:29 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:32.712163 | orchestrator | 2025-09-18 11:01:32 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:32.712371 | orchestrator | 2025-09-18 11:01:32 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:32.712393 | orchestrator | 2025-09-18 11:01:32 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:35.761254 | orchestrator | 2025-09-18 11:01:35 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:35.763076 | orchestrator | 2025-09-18 11:01:35 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:35.763154 | orchestrator | 2025-09-18 11:01:35 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:38.808362 | orchestrator | 2025-09-18 11:01:38 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:38.809694 | orchestrator | 2025-09-18 11:01:38 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:38.809723 | orchestrator | 2025-09-18 11:01:38 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:41.860763 | orchestrator | 2025-09-18 11:01:41 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:41.860863 | orchestrator | 2025-09-18 11:01:41 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:41.860878 | orchestrator | 2025-09-18 11:01:41 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:44.902867 | orchestrator | 2025-09-18 11:01:44 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:44.904182 | orchestrator | 2025-09-18 11:01:44 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:44.904210 | orchestrator | 2025-09-18 11:01:44 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:47.946148 | orchestrator | 2025-09-18 11:01:47 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:01:47.946242 | orchestrator | 2025-09-18 11:01:47 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:01:47.946257 | orchestrator | 2025-09-18 11:01:47 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:01:50.990895 | orchestrator | 2025-09-18 11:01:50 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state
STARTED 2025-09-18 11:01:50.995157 | orchestrator | 2025-09-18 11:01:50 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:01:50.995873 | orchestrator | 2025-09-18 11:01:50 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:01:54.038797 | orchestrator | 2025-09-18 11:01:54 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:01:54.041899 | orchestrator | 2025-09-18 11:01:54 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:01:54.041926 | orchestrator | 2025-09-18 11:01:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:01:57.073002 | orchestrator | 2025-09-18 11:01:57 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:01:57.075113 | orchestrator | 2025-09-18 11:01:57 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:01:57.075148 | orchestrator | 2025-09-18 11:01:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:00.110833 | orchestrator | 2025-09-18 11:02:00 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:00.111362 | orchestrator | 2025-09-18 11:02:00 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:00.111391 | orchestrator | 2025-09-18 11:02:00 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:03.152096 | orchestrator | 2025-09-18 11:02:03 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:03.153724 | orchestrator | 2025-09-18 11:02:03 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:03.153754 | orchestrator | 2025-09-18 11:02:03 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:06.205539 | orchestrator | 2025-09-18 11:02:06 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:06.206927 | orchestrator | 2025-09-18 11:02:06 | INFO  
| Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:06.207173 | orchestrator | 2025-09-18 11:02:06 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:09.243868 | orchestrator | 2025-09-18 11:02:09 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:09.244449 | orchestrator | 2025-09-18 11:02:09 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:09.244503 | orchestrator | 2025-09-18 11:02:09 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:12.287051 | orchestrator | 2025-09-18 11:02:12 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:12.288758 | orchestrator | 2025-09-18 11:02:12 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:12.289066 | orchestrator | 2025-09-18 11:02:12 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:15.340410 | orchestrator | 2025-09-18 11:02:15 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:15.343276 | orchestrator | 2025-09-18 11:02:15 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:15.343314 | orchestrator | 2025-09-18 11:02:15 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:18.377716 | orchestrator | 2025-09-18 11:02:18 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:18.379662 | orchestrator | 2025-09-18 11:02:18 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:18.380088 | orchestrator | 2025-09-18 11:02:18 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:21.428088 | orchestrator | 2025-09-18 11:02:21 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:21.429665 | orchestrator | 2025-09-18 11:02:21 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 
11:02:21.430111 | orchestrator | 2025-09-18 11:02:21 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:24.479811 | orchestrator | 2025-09-18 11:02:24 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:24.482561 | orchestrator | 2025-09-18 11:02:24 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:24.482597 | orchestrator | 2025-09-18 11:02:24 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:27.533922 | orchestrator | 2025-09-18 11:02:27 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:27.535125 | orchestrator | 2025-09-18 11:02:27 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:27.535245 | orchestrator | 2025-09-18 11:02:27 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:30.579753 | orchestrator | 2025-09-18 11:02:30 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:30.580064 | orchestrator | 2025-09-18 11:02:30 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:30.580093 | orchestrator | 2025-09-18 11:02:30 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:33.628410 | orchestrator | 2025-09-18 11:02:33 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:33.631298 | orchestrator | 2025-09-18 11:02:33 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:33.631425 | orchestrator | 2025-09-18 11:02:33 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:36.674774 | orchestrator | 2025-09-18 11:02:36 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:36.674870 | orchestrator | 2025-09-18 11:02:36 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:36.674884 | orchestrator | 2025-09-18 11:02:36 | INFO  | Wait 1 second(s) 
until the next check 2025-09-18 11:02:39.717804 | orchestrator | 2025-09-18 11:02:39 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:39.719878 | orchestrator | 2025-09-18 11:02:39 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:39.720221 | orchestrator | 2025-09-18 11:02:39 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:42.764622 | orchestrator | 2025-09-18 11:02:42 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:42.766227 | orchestrator | 2025-09-18 11:02:42 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:42.766258 | orchestrator | 2025-09-18 11:02:42 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:45.806892 | orchestrator | 2025-09-18 11:02:45 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:45.810204 | orchestrator | 2025-09-18 11:02:45 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:45.810251 | orchestrator | 2025-09-18 11:02:45 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:48.849943 | orchestrator | 2025-09-18 11:02:48 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:48.850621 | orchestrator | 2025-09-18 11:02:48 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:48.850654 | orchestrator | 2025-09-18 11:02:48 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:51.872267 | orchestrator | 2025-09-18 11:02:51 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:51.873708 | orchestrator | 2025-09-18 11:02:51 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:51.873928 | orchestrator | 2025-09-18 11:02:51 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:54.911566 | orchestrator | 2025-09-18 
11:02:54 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:54.913791 | orchestrator | 2025-09-18 11:02:54 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:54.914226 | orchestrator | 2025-09-18 11:02:54 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:02:57.949338 | orchestrator | 2025-09-18 11:02:57 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:02:57.950281 | orchestrator | 2025-09-18 11:02:57 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:02:57.950373 | orchestrator | 2025-09-18 11:02:57 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:00.999946 | orchestrator | 2025-09-18 11:03:01 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:01.003163 | orchestrator | 2025-09-18 11:03:01 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:01.003610 | orchestrator | 2025-09-18 11:03:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:04.048002 | orchestrator | 2025-09-18 11:03:04 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:04.048732 | orchestrator | 2025-09-18 11:03:04 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:04.048796 | orchestrator | 2025-09-18 11:03:04 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:07.079333 | orchestrator | 2025-09-18 11:03:07 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:07.079896 | orchestrator | 2025-09-18 11:03:07 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:07.079928 | orchestrator | 2025-09-18 11:03:07 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:10.131337 | orchestrator | 2025-09-18 11:03:10 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state 
STARTED 2025-09-18 11:03:10.132894 | orchestrator | 2025-09-18 11:03:10 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:10.132922 | orchestrator | 2025-09-18 11:03:10 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:13.169602 | orchestrator | 2025-09-18 11:03:13 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:13.169706 | orchestrator | 2025-09-18 11:03:13 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:13.169721 | orchestrator | 2025-09-18 11:03:13 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:16.214948 | orchestrator | 2025-09-18 11:03:16 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:16.215141 | orchestrator | 2025-09-18 11:03:16 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:16.215164 | orchestrator | 2025-09-18 11:03:16 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:19.258426 | orchestrator | 2025-09-18 11:03:19 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:19.260642 | orchestrator | 2025-09-18 11:03:19 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:19.260669 | orchestrator | 2025-09-18 11:03:19 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:22.302432 | orchestrator | 2025-09-18 11:03:22 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:22.303497 | orchestrator | 2025-09-18 11:03:22 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:22.304293 | orchestrator | 2025-09-18 11:03:22 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:25.347823 | orchestrator | 2025-09-18 11:03:25 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:25.348675 | orchestrator | 2025-09-18 11:03:25 | INFO  
| Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:25.348748 | orchestrator | 2025-09-18 11:03:25 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:28.395551 | orchestrator | 2025-09-18 11:03:28 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:28.397139 | orchestrator | 2025-09-18 11:03:28 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:28.397168 | orchestrator | 2025-09-18 11:03:28 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:31.442106 | orchestrator | 2025-09-18 11:03:31 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:31.442884 | orchestrator | 2025-09-18 11:03:31 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:31.442910 | orchestrator | 2025-09-18 11:03:31 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:34.488701 | orchestrator | 2025-09-18 11:03:34 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:34.488889 | orchestrator | 2025-09-18 11:03:34 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:34.489392 | orchestrator | 2025-09-18 11:03:34 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:37.538958 | orchestrator | 2025-09-18 11:03:37 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:37.540009 | orchestrator | 2025-09-18 11:03:37 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:37.540040 | orchestrator | 2025-09-18 11:03:37 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:40.565892 | orchestrator | 2025-09-18 11:03:40 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:40.566127 | orchestrator | 2025-09-18 11:03:40 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 
11:03:40.566151 | orchestrator | 2025-09-18 11:03:40 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:43.611721 | orchestrator | 2025-09-18 11:03:43 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:43.611899 | orchestrator | 2025-09-18 11:03:43 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:43.612954 | orchestrator | 2025-09-18 11:03:43 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:46.752341 | orchestrator | 2025-09-18 11:03:46 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:46.752697 | orchestrator | 2025-09-18 11:03:46 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:46.752723 | orchestrator | 2025-09-18 11:03:46 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:49.794527 | orchestrator | 2025-09-18 11:03:49 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:49.795654 | orchestrator | 2025-09-18 11:03:49 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:49.795693 | orchestrator | 2025-09-18 11:03:49 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:52.841939 | orchestrator | 2025-09-18 11:03:52 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:52.843941 | orchestrator | 2025-09-18 11:03:52 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:52.845396 | orchestrator | 2025-09-18 11:03:52 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:03:55.891318 | orchestrator | 2025-09-18 11:03:55 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:55.892654 | orchestrator | 2025-09-18 11:03:55 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:55.892726 | orchestrator | 2025-09-18 11:03:55 | INFO  | Wait 1 second(s) 
until the next check 2025-09-18 11:03:58.938737 | orchestrator | 2025-09-18 11:03:58 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:03:58.939109 | orchestrator | 2025-09-18 11:03:58 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:03:58.939608 | orchestrator | 2025-09-18 11:03:58 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:01.994520 | orchestrator | 2025-09-18 11:04:01 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:04:01.995912 | orchestrator | 2025-09-18 11:04:01 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:01.996801 | orchestrator | 2025-09-18 11:04:01 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:05.043663 | orchestrator | 2025-09-18 11:04:05 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:04:05.046055 | orchestrator | 2025-09-18 11:04:05 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:05.046138 | orchestrator | 2025-09-18 11:04:05 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:08.094462 | orchestrator | 2025-09-18 11:04:08 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:04:08.097317 | orchestrator | 2025-09-18 11:04:08 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:08.097361 | orchestrator | 2025-09-18 11:04:08 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:11.145018 | orchestrator | 2025-09-18 11:04:11 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED 2025-09-18 11:04:11.146890 | orchestrator | 2025-09-18 11:04:11 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:11.146921 | orchestrator | 2025-09-18 11:04:11 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:14.207323 | orchestrator | 2025-09-18 
11:04:14 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:04:14.207496 | orchestrator | 2025-09-18 11:04:14 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:04:14.207513 | orchestrator | 2025-09-18 11:04:14 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:04:17.251860 | orchestrator | 2025-09-18 11:04:17 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state STARTED
2025-09-18 11:04:17.253484 | orchestrator | 2025-09-18 11:04:17 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED
2025-09-18 11:04:17.253677 | orchestrator | 2025-09-18 11:04:17 | INFO  | Wait 1 second(s) until the next check
2025-09-18 11:04:20.299528 | orchestrator | 2025-09-18 11:04:20 | INFO  | Task f1efd46a-079a-4763-954b-2b105648267f is in state SUCCESS
2025-09-18 11:04:20.301651 | orchestrator |

PLAY [Group hosts based on configuration] **************************************

TASK [Group hosts based on OpenStack release] **********************************
Thursday 18 September 2025 10:55:49 +0000 (0:00:00.260) 0:00:00.260 ****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Group hosts based on Kolla action] ***************************************
Thursday 18 September 2025 10:55:49 +0000 (0:00:00.723) 0:00:00.983 ****
changed: [testbed-manager]
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

TASK [Group hosts based on enabled services] ***********************************
Thursday 18 September 2025 10:55:50 +0000 (0:00:00.599) 0:00:01.583 ****
changed: [testbed-manager] => (item=enable_nova_True)
changed: [testbed-node-0] => (item=enable_nova_True)
changed: [testbed-node-1] => (item=enable_nova_True)
changed: [testbed-node-2] => (item=enable_nova_True)
changed: [testbed-node-3] => (item=enable_nova_True)
changed: [testbed-node-4] => (item=enable_nova_True)
changed: [testbed-node-5] => (item=enable_nova_True)

PLAY [Bootstrap nova API databases] ********************************************

TASK [Bootstrap deploy] ********************************************************
Thursday 18 September 2025 10:55:51 +0000 (0:00:00.738) 0:00:02.321 ****
included: nova for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova : Creating Nova databases] ******************************************
Thursday 18 September 2025 10:55:52 +0000 (0:00:00.679) 0:00:03.001 ****
changed: [testbed-node-0] => (item=nova_cell0)
changed: [testbed-node-0] => (item=nova_api)

TASK [nova : Creating Nova databases user and setting permissions] *************
Thursday 18 September 2025 10:55:56 +0000 (0:00:04.380) 0:00:07.381 ****
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0]

TASK [nova : Ensuring config directories exist] ********************************
Thursday 18 September 2025 10:56:00 +0000 (0:00:04.310) 0:00:11.692 ****
changed: [testbed-node-0]

TASK [nova : Copying over config.json files for nova-api-bootstrap] ************
Thursday 18 September 2025 10:56:01 +0000 (0:00:00.652) 0:00:12.345 ****
changed: [testbed-node-0]

TASK [nova : Copying over nova.conf for nova-api-bootstrap] ********************
Thursday 18 September 2025 10:56:02 +0000 (0:00:01.458) 0:00:13.803 ****
changed: [testbed-node-0]

TASK [nova : include_tasks] ****************************************************
Thursday 18 September 2025 10:56:05 +0000 (0:00:02.873) 0:00:16.676 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [nova : Running Nova API bootstrap container] *****************************
Thursday 18 September 2025 10:56:06 +0000 (0:00:00.406) 0:00:17.083 ****
ok: [testbed-node-0]

TASK [nova : Create cell0 mappings] ********************************************
Thursday 18 September 2025 10:56:37 +0000 (0:00:31.058) 0:00:48.142 ****
changed: [testbed-node-0]

TASK [nova-cell : Get a list of existing cells] ********************************
Thursday 18 September 2025 10:56:51 +0000 (0:00:14.495) 0:01:02.637 ****
ok: [testbed-node-0]

TASK [nova-cell : Extract current cell settings from list] *********************
Thursday 18 September 2025 10:57:04 +0000 (0:00:12.880) 0:01:15.518 ****
ok: [testbed-node-0]

TASK [nova : Update cell0 mappings] ********************************************
Thursday 18 September 2025 10:57:05 +0000 (0:00:01.169) 0:01:16.688 ****
skipping: [testbed-node-0]

TASK [nova : include_tasks] ****************************************************
Thursday 18 September 2025 10:57:06 +0000 (0:00:00.511) 0:01:17.199 ****
included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova : Running Nova API bootstrap container] *****************************
Thursday 18 September 2025 10:57:06 +0000 (0:00:00.522) 0:01:17.722 ****
ok: [testbed-node-0]

TASK [Bootstrap upgrade] *******************************************************
Thursday 18 September 2025 10:57:24 +0000 (0:00:17.822) 0:01:35.545 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

PLAY [Bootstrap nova cell databases] *******************************************

TASK [Bootstrap deploy] ********************************************************
Thursday 18 September 2025 10:57:24 +0000 (0:00:00.310) 0:01:35.855 ****
included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2

TASK [nova-cell : Creating Nova cell database] *********************************
Thursday 18 September 2025 10:57:25 +0000 (0:00:00.554) 0:01:36.410 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Creating Nova cell database user and setting permissions] ****
Thursday 18 September 2025 10:57:27 +0000 (0:00:02.283) 0:01:38.693 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Thursday 18 September 2025 10:57:30 +0000 (0:00:02.356) 0:01:41.050 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Thursday 18 September 2025 10:57:30 +0000 (0:00:00.304) 0:01:41.354 ****
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}]

TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ******************
Thursday 18 September 2025 10:57:38 +0000 (0:00:08.433) 0:01:49.788 ****
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] *******************
Thursday 18 September 2025 10:57:39 +0000 (0:00:00.358) 0:01:50.147 ****
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=None)
skipping: [testbed-node-2]

TASK [nova-cell : Ensuring config directories exist] ***************************
Thursday 18 September 2025 10:57:39 +0000 (0:00:00.646) 0:01:50.793 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] ******
Thursday 18 September 2025 10:57:40 +0000 (0:00:00.522) 0:01:51.316 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
Thursday 18 September 2025 10:57:41 +0000 (0:00:01.019) 0:01:52.335 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
changed: [testbed-node-0]

TASK [nova-cell : Running Nova cell bootstrap container] ***********************
Thursday 18 September 2025 10:57:43 +0000 (0:00:02.146) 0:01:54.482 ****
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-0]

2025-09-18 11:04:20.303832 | orchestrator
| TASK [nova-cell : Get a list of existing cells] ******************************** 2025-09-18 11:04:20.303843 | orchestrator | Thursday 18 September 2025 10:58:04 +0000 (0:00:21.356) 0:02:15.838 **** 2025-09-18 11:04:20.303854 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.303864 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.303875 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:20.303886 | orchestrator | 2025-09-18 11:04:20.303897 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-09-18 11:04:20.303915 | orchestrator | Thursday 18 September 2025 10:58:17 +0000 (0:00:12.934) 0:02:28.772 **** 2025-09-18 11:04:20.303926 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:20.303936 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.303947 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.303958 | orchestrator | 2025-09-18 11:04:20.303969 | orchestrator | TASK [nova-cell : Create cell] ************************************************* 2025-09-18 11:04:20.303980 | orchestrator | Thursday 18 September 2025 10:58:18 +0000 (0:00:01.220) 0:02:29.993 **** 2025-09-18 11:04:20.303991 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.304001 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.304012 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:20.304023 | orchestrator | 2025-09-18 11:04:20.304034 | orchestrator | TASK [nova-cell : Update cell] ************************************************* 2025-09-18 11:04:20.304044 | orchestrator | Thursday 18 September 2025 10:58:31 +0000 (0:00:12.423) 0:02:42.416 **** 2025-09-18 11:04:20.304055 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.304066 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.304083 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.304094 | orchestrator | 2025-09-18 11:04:20.304105 | orchestrator | TASK 
[Bootstrap upgrade] ******************************************************* 2025-09-18 11:04:20.304116 | orchestrator | Thursday 18 September 2025 10:58:32 +0000 (0:00:01.220) 0:02:43.636 **** 2025-09-18 11:04:20.304127 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.304138 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.304149 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.304159 | orchestrator | 2025-09-18 11:04:20.304170 | orchestrator | PLAY [Apply role nova] ********************************************************* 2025-09-18 11:04:20.304181 | orchestrator | 2025-09-18 11:04:20.304192 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 11:04:20.304203 | orchestrator | Thursday 18 September 2025 10:58:33 +0000 (0:00:00.601) 0:02:44.238 **** 2025-09-18 11:04:20.304213 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:20.304225 | orchestrator | 2025-09-18 11:04:20.304236 | orchestrator | TASK [service-ks-register : nova | Creating services] ************************** 2025-09-18 11:04:20.304247 | orchestrator | Thursday 18 September 2025 10:58:33 +0000 (0:00:00.604) 0:02:44.842 **** 2025-09-18 11:04:20.304258 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy))  2025-09-18 11:04:20.304269 | orchestrator | changed: [testbed-node-0] => (item=nova (compute)) 2025-09-18 11:04:20.304280 | orchestrator | 2025-09-18 11:04:20.304290 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] ************************* 2025-09-18 11:04:20.304301 | orchestrator | Thursday 18 September 2025 10:58:37 +0000 (0:00:03.730) 0:02:48.573 **** 2025-09-18 11:04:20.304312 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal)  2025-09-18 11:04:20.304344 | orchestrator | skipping: 
[testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public)  2025-09-18 11:04:20.304356 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal) 2025-09-18 11:04:20.304367 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public) 2025-09-18 11:04:20.304378 | orchestrator | 2025-09-18 11:04:20.304389 | orchestrator | TASK [service-ks-register : nova | Creating projects] ************************** 2025-09-18 11:04:20.304400 | orchestrator | Thursday 18 September 2025 10:58:44 +0000 (0:00:06.813) 0:02:55.387 **** 2025-09-18 11:04:20.304411 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 11:04:20.304422 | orchestrator | 2025-09-18 11:04:20.304432 | orchestrator | TASK [service-ks-register : nova | Creating users] ***************************** 2025-09-18 11:04:20.304443 | orchestrator | Thursday 18 September 2025 10:58:47 +0000 (0:00:03.346) 0:02:58.734 **** 2025-09-18 11:04:20.304454 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 11:04:20.304472 | orchestrator | changed: [testbed-node-0] => (item=nova -> service) 2025-09-18 11:04:20.304483 | orchestrator | 2025-09-18 11:04:20.304494 | orchestrator | TASK [service-ks-register : nova | Creating roles] ***************************** 2025-09-18 11:04:20.304504 | orchestrator | Thursday 18 September 2025 10:58:51 +0000 (0:00:03.909) 0:03:02.643 **** 2025-09-18 11:04:20.304515 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 11:04:20.304526 | orchestrator | 2025-09-18 11:04:20.304537 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************ 2025-09-18 11:04:20.304548 | orchestrator | Thursday 18 September 2025 10:58:55 +0000 (0:00:03.518) 0:03:06.162 **** 2025-09-18 11:04:20.304559 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin) 
2025-09-18 11:04:20.304569 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service) 2025-09-18 11:04:20.304580 | orchestrator | 2025-09-18 11:04:20.304591 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-09-18 11:04:20.304608 | orchestrator | Thursday 18 September 2025 10:59:02 +0000 (0:00:07.815) 0:03:13.977 **** 2025-09-18 11:04:20.304626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.304648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.304663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': 
'8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.304690 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.304704 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.304721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.304732 | orchestrator | 2025-09-18 11:04:20.304744 | orchestrator | TASK [nova : Check if policies shall be overwritten] *************************** 2025-09-18 11:04:20.304755 | orchestrator | Thursday 18 September 2025 10:59:04 +0000 (0:00:01.370) 0:03:15.348 **** 2025-09-18 11:04:20.304766 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.304777 | orchestrator | 2025-09-18 11:04:20.304788 | orchestrator | TASK [nova : Set nova policy file] ********************************************* 2025-09-18 11:04:20.304799 | orchestrator | Thursday 18 September 2025 10:59:04 +0000 (0:00:00.138) 0:03:15.487 **** 2025-09-18 11:04:20.304810 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.304821 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.304832 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.304843 | orchestrator | 2025-09-18 11:04:20.304854 | orchestrator | TASK [nova : Check for vendordata file] **************************************** 2025-09-18 11:04:20.304865 | orchestrator | Thursday 18 September 2025 10:59:04 +0000 (0:00:00.316) 0:03:15.803 **** 2025-09-18 11:04:20.304876 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-09-18 11:04:20.304886 | orchestrator | 2025-09-18 11:04:20.304897 | orchestrator | TASK [nova : Set vendordata file path] ***************************************** 2025-09-18 11:04:20.304908 | orchestrator | Thursday 18 September 2025 10:59:05 +0000 (0:00:00.957) 0:03:16.760 **** 2025-09-18 11:04:20.304919 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.304937 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.304948 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.304959 | orchestrator | 2025-09-18 11:04:20.304970 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-09-18 11:04:20.304981 | orchestrator | 
Thursday 18 September 2025 10:59:06 +0000 (0:00:00.350) 0:03:17.111 **** 2025-09-18 11:04:20.304992 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:20.305003 | orchestrator | 2025-09-18 11:04:20.305014 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-18 11:04:20.305025 | orchestrator | Thursday 18 September 2025 10:59:06 +0000 (0:00:00.545) 0:03:17.656 **** 2025-09-18 11:04:20.305037 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.305058 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': 
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.305073 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.305126 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.305140 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.305159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.305170 | orchestrator | 2025-09-18 11:04:20.305182 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-18 11:04:20.305193 | orchestrator | Thursday 18 September 2025 10:59:09 +0000 (0:00:02.575) 0:03:20.232 **** 2025-09-18 11:04:20.305216 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.305228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.305247 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.305259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.305271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.305283 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.305303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.305366 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.305389 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.305400 | orchestrator | 2025-09-18 11:04:20.305411 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ******** 2025-09-18 11:04:20.305423 | orchestrator | Thursday 18 September 2025 10:59:10 +0000 (0:00:00.890) 0:03:21.123 **** 2025-09-18 11:04:20.305435 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.305447 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.305459 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.306456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.306498 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.306522 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.306535 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.306547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': 
{'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.306558 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.306570 | orchestrator | 2025-09-18 11:04:20.306581 | orchestrator | TASK [nova : Copying over config.json files for services] ********************** 2025-09-18 11:04:20.306592 | orchestrator | Thursday 18 September 2025 10:59:10 +0000 (0:00:00.873) 0:03:21.996 **** 2025-09-18 11:04:20.306614 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 
'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306653 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306699 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306711 | orchestrator | 2025-09-18 11:04:20.306722 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-09-18 11:04:20.306733 | orchestrator | Thursday 18 September 2025 10:59:13 +0000 (0:00:02.601) 0:03:24.598 **** 2025-09-18 11:04:20.306756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306769 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 
'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.306801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306824 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306836 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 
'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.306848 | orchestrator | 2025-09-18 11:04:20.306859 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-09-18 11:04:20.306870 | orchestrator | Thursday 18 September 2025 10:59:19 +0000 (0:00:06.166) 0:03:30.765 **** 2025-09-18 11:04:20.306882 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 
'tls_backend': 'no'}}}})  2025-09-18 11:04:20.306899 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.306911 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.306923 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 
11:04:20.306946 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.306958 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.306969 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-09-18 11:04:20.306981 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.306993 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.307004 | orchestrator | 2025-09-18 11:04:20.307015 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-09-18 11:04:20.307026 | orchestrator | Thursday 18 September 2025 10:59:20 +0000 (0:00:00.639) 0:03:31.404 **** 2025-09-18 11:04:20.307038 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:20.307050 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:20.307062 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:20.307076 | orchestrator | 2025-09-18 11:04:20.307093 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-09-18 11:04:20.307112 | orchestrator | Thursday 18 September 2025 10:59:22 +0000 (0:00:01.634) 0:03:33.039 **** 2025-09-18 11:04:20.307125 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.307138 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.307150 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.307163 | orchestrator | 2025-09-18 11:04:20.307175 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-09-18 11:04:20.307187 | orchestrator | Thursday 18 September 2025 10:59:22 +0000 (0:00:00.345) 0:03:33.384 **** 2025-09-18 11:04:20.307210 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.307225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.307246 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/nova-api:2024.2', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:20.307267 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.307282 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.307301 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/nova-scheduler:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.307314 | orchestrator | 2025-09-18 11:04:20.307345 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-18 11:04:20.307359 | orchestrator | Thursday 18 September 2025 10:59:24 +0000 (0:00:02.229) 0:03:35.614 **** 2025-09-18 11:04:20.307372 | orchestrator | 2025-09-18 11:04:20.307385 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-09-18 11:04:20.307396 | orchestrator | Thursday 18 September 2025 10:59:24 +0000 (0:00:00.131) 0:03:35.745 **** 2025-09-18 11:04:20.307407 | orchestrator | 2025-09-18 11:04:20.307418 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-09-18 11:04:20.307429 | orchestrator | Thursday 18 September 2025 10:59:24 +0000 (0:00:00.134) 0:03:35.880 **** 2025-09-18 11:04:20.307440 | orchestrator | 2025-09-18 11:04:20.307451 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-09-18 11:04:20.307462 | orchestrator | Thursday 18 September 2025 10:59:25 +0000 (0:00:00.133) 0:03:36.013 **** 2025-09-18 11:04:20.307472 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:20.307484 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:20.307495 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:20.307505 | orchestrator | 2025-09-18 11:04:20.307516 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-09-18 11:04:20.307527 | orchestrator | Thursday 18 September 2025 10:59:48 +0000 (0:00:23.675) 0:03:59.689 **** 2025-09-18 11:04:20.307538 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:20.307549 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:20.307560 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:20.307571 | orchestrator | 2025-09-18 11:04:20.307582 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-09-18 11:04:20.307593 | orchestrator | 2025-09-18 11:04:20.307604 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 11:04:20.307615 | orchestrator | Thursday 18 September 2025 10:59:55 +0000 (0:00:06.335) 0:04:06.024 **** 2025-09-18 11:04:20.307632 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:20.307644 | orchestrator | 2025-09-18 11:04:20.307655 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 11:04:20.307666 | orchestrator | Thursday 18 September 2025 10:59:56 +0000 (0:00:01.309) 0:04:07.334 **** 2025-09-18 11:04:20.307677 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.307688 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.307698 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.307709 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.307720 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.307730 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.307741 | orchestrator | 2025-09-18 11:04:20.307752 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-09-18 11:04:20.307763 | orchestrator | Thursday 18 September 2025 10:59:56 +0000 (0:00:00.592) 0:04:07.926 **** 2025-09-18 11:04:20.307774 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.307785 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.307796 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.307807 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-09-18 11:04:20.307818 | orchestrator | 2025-09-18 11:04:20.307829 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-09-18 11:04:20.307846 | orchestrator | Thursday 18 September 2025 10:59:57 +0000 (0:00:01.000) 0:04:08.927 **** 2025-09-18 11:04:20.307857 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-09-18 11:04:20.307868 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-09-18 11:04:20.307879 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-09-18 
11:04:20.307890 | orchestrator | 2025-09-18 11:04:20.307901 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-09-18 11:04:20.307912 | orchestrator | Thursday 18 September 2025 10:59:58 +0000 (0:00:00.674) 0:04:09.601 **** 2025-09-18 11:04:20.307923 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-09-18 11:04:20.307934 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-09-18 11:04:20.307945 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-09-18 11:04:20.307956 | orchestrator | 2025-09-18 11:04:20.307967 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-09-18 11:04:20.307978 | orchestrator | Thursday 18 September 2025 10:59:59 +0000 (0:00:01.199) 0:04:10.801 **** 2025-09-18 11:04:20.307989 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-09-18 11:04:20.308000 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.308011 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-09-18 11:04:20.308022 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.308032 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-09-18 11:04:20.308043 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.308054 | orchestrator | 2025-09-18 11:04:20.308065 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-09-18 11:04:20.308076 | orchestrator | Thursday 18 September 2025 11:00:00 +0000 (0:00:00.743) 0:04:11.545 **** 2025-09-18 11:04:20.308087 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 11:04:20.308098 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 11:04:20.308114 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.308125 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 11:04:20.308136 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 11:04:20.308147 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.308158 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-09-18 11:04:20.308177 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 11:04:20.308188 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-09-18 11:04:20.308199 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.308210 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 11:04:20.308221 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 11:04:20.308232 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 11:04:20.308243 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-09-18 11:04:20.308254 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-09-18 11:04:20.308265 | orchestrator | 2025-09-18 11:04:20.308276 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-09-18 11:04:20.308287 | orchestrator | Thursday 18 September 2025 11:00:02 +0000 (0:00:02.113) 0:04:13.659 **** 2025-09-18 11:04:20.308298 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.308309 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.308380 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.308393 | orchestrator | changed: [testbed-node-3] 2025-09-18 11:04:20.308404 | orchestrator | changed: [testbed-node-5] 2025-09-18 11:04:20.308415 | orchestrator | changed: [testbed-node-4] 2025-09-18 11:04:20.308426 | 
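The module-load and sysctl tasks above configure the compute nodes (testbed-node-3/4/5) for bridged networking. A minimal sketch of what those tasks amount to on a host — assuming the stock `modules-load.d` and `sysctl.d` conventions; the exact file names the role writes are not shown in this log:

```shell
# Hedged sketch of the "Load and persist br_netfilter module" and
# "Enable bridge-nf-call sysctl variables" tasks seen in the log.
# Non-destructive: we only print the configuration the tasks would apply.
MODULE=br_netfilter
SYSCTL_KEYS="net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables"

# Persisting the module means dropping its name into a modules-load.d file,
# e.g. /etc/modules-load.d/br_netfilter.conf (hypothetical path):
echo "$MODULE"

# Enabling the bridge-nf-call variables means setting each key to 1,
# as a sysctl.d drop-in would carry:
for key in $SYSCTL_KEYS; do
  echo "$key = 1"
done
```

On a live host the equivalent immediate actions would be `modprobe br_netfilter` followed by `sysctl -w` for each key, which matches the per-node `ok`/`changed` results recorded above.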
orchestrator | 2025-09-18 11:04:20.308437 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-09-18 11:04:20.308447 | orchestrator | Thursday 18 September 2025 11:00:04 +0000 (0:00:01.479) 0:04:15.138 **** 2025-09-18 11:04:20.308457 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.308467 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.308477 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.308486 | orchestrator | changed: [testbed-node-3] 2025-09-18 11:04:20.308496 | orchestrator | changed: [testbed-node-5] 2025-09-18 11:04:20.308505 | orchestrator | changed: [testbed-node-4] 2025-09-18 11:04:20.308515 | orchestrator | 2025-09-18 11:04:20.308525 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-09-18 11:04:20.308534 | orchestrator | Thursday 18 September 2025 11:00:05 +0000 (0:00:01.670) 0:04:16.808 **** 2025-09-18 11:04:20.308545 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308565 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308610 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308630 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 
'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308662 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308677 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308688 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308699 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308726 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308752 | orchestrator | 2025-09-18 11:04:20.308762 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-09-18 11:04:20.308772 | orchestrator | 
Thursday 18 September 2025 11:00:08 +0000 (0:00:02.517) 0:04:19.327 **** 2025-09-18 11:04:20.308786 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:20.308796 | orchestrator | 2025-09-18 11:04:20.308806 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] *********** 2025-09-18 11:04:20.308816 | orchestrator | Thursday 18 September 2025 11:00:09 +0000 (0:00:01.228) 0:04:20.555 **** 2025-09-18 11:04:20.308826 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308837 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', 
'/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308853 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308869 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308884 | orchestrator | changed: 
[testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308915 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308926 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.308997 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': 
True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.309007 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.309017 | orchestrator | 2025-09-18 11:04:20.309027 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] *** 2025-09-18 11:04:20.309043 | orchestrator | Thursday 18 September 2025 11:00:13 +0000 (0:00:03.713) 0:04:24.269 **** 2025-09-18 11:04:20.309059 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309070 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309085 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 
5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309095 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.309105 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309116 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309141 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309151 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.309161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309176 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309186 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309197 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.309207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 11:04:20.309217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309233 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.309250 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 11:04:20.309261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309271 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.309285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.309296 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.309306 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.309316 | orchestrator |
2025-09-18 11:04:20.309340 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-09-18 11:04:20.309351 | orchestrator | Thursday 18 September 2025 11:00:15 +0000 (0:00:01.797) 0:04:26.067 ****
2025-09-18 11:04:20.309361 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': 
{'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309382 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309472 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.309483 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 
'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309509 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309519 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.309530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.309552 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.309563 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': 
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309573 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.309587 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 11:04:20.309597 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309607 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.309618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': 
{'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 11:04:20.309634 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.309645 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.309655 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-09-18 11:04:20.309671 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 
'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.309681 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.309690 | orchestrator |
2025-09-18 11:04:20.309700 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-18 11:04:20.309710 | orchestrator | Thursday 18 September 2025 11:00:17 +0000 (0:00:02.284) 0:04:28.352 ****
2025-09-18 11:04:20.309720 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.309730 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.309740 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.309750 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-09-18 11:04:20.309760 | orchestrator |
2025-09-18 11:04:20.309769 | orchestrator | TASK [nova-cell : Check nova keyring file] *************************************
2025-09-18 11:04:20.309780 | orchestrator | Thursday 18 September 2025 11:00:18 +0000 (0:00:01.106) 0:04:29.458 ****
2025-09-18 11:04:20.309789 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 11:04:20.309799 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-18 11:04:20.309809 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-18 11:04:20.309819 | orchestrator |
2025-09-18 11:04:20.309828 | orchestrator | TASK [nova-cell : Check cinder keyring file] ***********************************
2025-09-18 11:04:20.309842 | orchestrator | Thursday 18 September 2025 11:00:19 +0000 (0:00:00.917) 0:04:30.376 ****
2025-09-18 11:04:20.309852 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 11:04:20.309862 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-09-18 11:04:20.309872 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-09-18 11:04:20.309881 | orchestrator |
2025-09-18 11:04:20.309891 | orchestrator | TASK [nova-cell : Extract nova key from file] **********************************
2025-09-18 11:04:20.309901 | orchestrator | Thursday 18 September 2025 11:00:20 +0000 (0:00:00.953) 0:04:31.330 ****
2025-09-18 11:04:20.309910 | orchestrator | ok: [testbed-node-3]
2025-09-18 11:04:20.309920 | orchestrator | ok: [testbed-node-4]
2025-09-18 11:04:20.309935 | orchestrator | ok: [testbed-node-5]
2025-09-18 11:04:20.309945 | orchestrator |
2025-09-18 11:04:20.309955 | orchestrator | TASK [nova-cell : Extract cinder key from file] ********************************
2025-09-18 11:04:20.309965 | orchestrator | Thursday 18 September 2025 11:00:20 +0000 (0:00:00.531) 0:04:31.862 ****
2025-09-18 11:04:20.309974 | orchestrator | ok: [testbed-node-3]
2025-09-18 11:04:20.309984 | orchestrator | ok: [testbed-node-4]
2025-09-18 11:04:20.309994 | orchestrator | ok: [testbed-node-5]
2025-09-18 11:04:20.310003 | orchestrator |
2025-09-18 11:04:20.310013 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] ****************************
2025-09-18 11:04:20.310065 | orchestrator | Thursday 18 September 2025 11:00:21 +0000 (0:00:00.809) 0:04:32.671 ****
2025-09-18 11:04:20.310076 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-18 11:04:20.310086 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-18 11:04:20.310096 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-18 11:04:20.310106 | orchestrator |
2025-09-18 11:04:20.310115 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] **************************
2025-09-18 11:04:20.310125 | orchestrator | Thursday 18 September 2025 11:00:22 +0000 (0:00:01.246) 0:04:33.917 ****
2025-09-18 11:04:20.310135 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-18 11:04:20.310144 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-18 11:04:20.310154 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-18 11:04:20.310164 | orchestrator |
2025-09-18 11:04:20.310173 | orchestrator | TASK [nova-cell : Copy over ceph.conf] *****************************************
2025-09-18 11:04:20.310183 | orchestrator | Thursday 18 September 2025 11:00:24 +0000 (0:00:01.227) 0:04:35.144 ****
2025-09-18 11:04:20.310193 | orchestrator | changed: [testbed-node-3] => (item=nova-compute)
2025-09-18 11:04:20.310203 | orchestrator | changed: [testbed-node-4] => (item=nova-compute)
2025-09-18 11:04:20.310213 | orchestrator | changed: [testbed-node-5] => (item=nova-compute)
2025-09-18 11:04:20.310222 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt)
2025-09-18 11:04:20.310232 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt)
2025-09-18 11:04:20.310241 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt)
2025-09-18 11:04:20.310251 | orchestrator |
2025-09-18 11:04:20.310260 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************
2025-09-18 11:04:20.310270 | orchestrator | Thursday 18 September 2025 11:00:28 +0000 (0:00:03.891) 0:04:39.036 ****
2025-09-18 11:04:20.310280 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.310289 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.310299 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.310309 | orchestrator |
2025-09-18 11:04:20.310318 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] **************************
2025-09-18 11:04:20.310343 | orchestrator | Thursday 18 September 2025 11:00:28 +0000 (0:00:00.520) 0:04:39.557 ****
2025-09-18 11:04:20.310353 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.310363 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.310372 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.310382 | orchestrator |
2025-09-18 11:04:20.310392 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] *******************
2025-09-18 11:04:20.310402 | orchestrator | Thursday 18 September 2025 11:00:28 +0000 (0:00:00.316) 0:04:39.873 ****
2025-09-18 11:04:20.310411 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.310421 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.310431 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.310441 | orchestrator |
2025-09-18 11:04:20.310467 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] *************************
2025-09-18 11:04:20.310478 | orchestrator | Thursday 18 September 2025 11:00:30 +0000 (0:00:01.197) 0:04:41.070 ****
2025-09-18 11:04:20.310488 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-18 11:04:20.310507 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-18 11:04:20.310517 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True})
2025-09-18 11:04:20.310527 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-18 11:04:20.310538 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-18 11:04:20.310547 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'})
2025-09-18 11:04:20.310557 | orchestrator |
2025-09-18 11:04:20.310567 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] *****************************
2025-09-18 11:04:20.310577 | orchestrator | Thursday 18 September 2025 11:00:33 +0000 (0:00:03.462) 0:04:44.533 ****
2025-09-18 11:04:20.310586 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-18 11:04:20.310596 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-18 11:04:20.310606 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-18 11:04:20.310624 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-09-18 11:04:20.310634 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.310644 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-09-18 11:04:20.310654 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.310663 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-09-18 11:04:20.310673 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.310683 | orchestrator |
2025-09-18 11:04:20.310693 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] **********************
2025-09-18 11:04:20.310703 | orchestrator | Thursday 18 September 2025 11:00:37 +0000 (0:00:03.482) 0:04:48.015 ****
2025-09-18 11:04:20.310712 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.310722 | orchestrator |
2025-09-18 11:04:20.310732 | orchestrator | TASK [nova-cell : Set nova policy file] ****************************************
2025-09-18 11:04:20.310741 | orchestrator | Thursday 18 September 2025 11:00:37 +0000 (0:00:00.149) 0:04:48.165 ****
2025-09-18 11:04:20.310751 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.310761 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.310770 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.310780 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.310790 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.310799 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.310809 | orchestrator |
2025-09-18 11:04:20.310819 | orchestrator | TASK [nova-cell : Check for vendordata file] ***********************************
2025-09-18 11:04:20.310829 | orchestrator | Thursday 18 September 2025 11:00:37 +0000 (0:00:00.577) 0:04:48.743 ****
2025-09-18 11:04:20.310838 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-09-18 11:04:20.310848 | orchestrator |
2025-09-18 11:04:20.310858 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************
2025-09-18 11:04:20.310868 | orchestrator | Thursday 18 September 2025 11:00:38 +0000 (0:00:00.675) 0:04:49.419 ****
2025-09-18 11:04:20.310877 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.310887 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.310897 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.310906 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.310916 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.310925 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.310935 | orchestrator |
2025-09-18 11:04:20.310945 | orchestrator | TASK [nova-cell : Copying over config.json files for services] *****************
2025-09-18 11:04:20.310955 | orchestrator | Thursday 18 September 2025 11:00:39 +0000 (0:00:00.800) 0:04:50.219 ****
2025-09-18 11:04:20.310965 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 
'/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.310992 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311007 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', 
''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311018 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311028 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311055 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311071 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311081 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311096 | orchestrator | changed: [testbed-node-0] 
=> (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311107 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311117 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311128 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': 
{'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311148 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311159 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311169 | orchestrator | 2025-09-18 11:04:20.311179 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-09-18 11:04:20.311189 | orchestrator | Thursday 18 September 2025 11:00:42 +0000 (0:00:03.644) 0:04:53.864 **** 2025-09-18 11:04:20.311204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.311214 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.311230 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.311240 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.311257 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.311268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.311282 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311292 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311308 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311339 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311350 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311365 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311386 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311402 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:20.311412 | orchestrator | 2025-09-18 11:04:20.311422 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] ******************* 2025-09-18 11:04:20.311432 | orchestrator | Thursday 18 September 2025 11:00:49 +0000 (0:00:06.535) 0:05:00.400 **** 2025-09-18 11:04:20.311442 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.311452 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.311462 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.311471 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.311481 | orchestrator | skipping: [testbed-node-2] 2025-09-18 
11:04:20.311491 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.311501 | orchestrator | 2025-09-18 11:04:20.311510 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] ************************** 2025-09-18 11:04:20.311520 | orchestrator | Thursday 18 September 2025 11:00:50 +0000 (0:00:01.288) 0:05:01.688 **** 2025-09-18 11:04:20.311530 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 11:04:20.311539 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 11:04:20.311549 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})  2025-09-18 11:04:20.311559 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 11:04:20.311573 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 11:04:20.311584 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.311594 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 11:04:20.311604 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 11:04:20.311614 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.311623 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})  2025-09-18 11:04:20.311633 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.311643 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 2025-09-18 11:04:20.311653 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 11:04:20.311663 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 
11:04:20.311673 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 2025-09-18 11:04:20.311682 | orchestrator | 2025-09-18 11:04:20.311692 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] ******************************* 2025-09-18 11:04:20.311702 | orchestrator | Thursday 18 September 2025 11:00:54 +0000 (0:00:03.864) 0:05:05.553 **** 2025-09-18 11:04:20.311712 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.311721 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.311731 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.311744 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.311754 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.311763 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.311773 | orchestrator | 2025-09-18 11:04:20.311783 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] ********************* 2025-09-18 11:04:20.311797 | orchestrator | Thursday 18 September 2025 11:00:55 +0000 (0:00:00.604) 0:05:06.157 **** 2025-09-18 11:04:20.311807 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 11:04:20.311817 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 11:04:20.311827 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})  2025-09-18 11:04:20.311836 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 11:04:20.311846 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 11:04:20.311856 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 
'auth.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311865 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 2025-09-18 11:04:20.311875 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311885 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311894 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311904 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.311914 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311924 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.311934 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})  2025-09-18 11:04:20.311943 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.311953 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.311963 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.311972 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.311982 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.311992 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.312001 | orchestrator | changed: 
[testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 2025-09-18 11:04:20.312011 | orchestrator | 2025-09-18 11:04:20.312021 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] ********************************** 2025-09-18 11:04:20.312031 | orchestrator | Thursday 18 September 2025 11:01:00 +0000 (0:00:05.197) 0:05:11.354 **** 2025-09-18 11:04:20.312041 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 11:04:20.312051 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 11:04:20.312064 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})  2025-09-18 11:04:20.312074 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 11:04:20.312090 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 11:04:20.312099 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 11:04:20.312109 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 11:04:20.312119 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-09-18 11:04:20.312129 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})  2025-09-18 11:04:20.312138 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 11:04:20.312148 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 11:04:20.312158 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-09-18 11:04:20.312167 | orchestrator | skipping: [testbed-node-0] => (item={'src': 
'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 11:04:20.312177 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.312187 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 11:04:20.312196 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.312206 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})  2025-09-18 11:04:20.312216 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.312233 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 11:04:20.312242 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 11:04:20.312252 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-09-18 11:04:20.312262 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 11:04:20.312272 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 11:04:20.312282 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-09-18 11:04:20.312291 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 11:04:20.312301 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 11:04:20.312310 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-09-18 11:04:20.312367 | orchestrator | 2025-09-18 11:04:20.312378 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ****************************** 2025-09-18 11:04:20.312388 | orchestrator | Thursday 18 September 2025 11:01:07 +0000 (0:00:06.772) 0:05:18.126 **** 2025-09-18 11:04:20.312398 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.312408 | 
orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.312418 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.312427 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.312437 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.312447 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.312456 | orchestrator | 2025-09-18 11:04:20.312466 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] ********************* 2025-09-18 11:04:20.312476 | orchestrator | Thursday 18 September 2025 11:01:07 +0000 (0:00:00.635) 0:05:18.762 **** 2025-09-18 11:04:20.312486 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.312496 | orchestrator | skipping: [testbed-node-4] 2025-09-18 11:04:20.312506 | orchestrator | skipping: [testbed-node-5] 2025-09-18 11:04:20.312515 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.312525 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.312535 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.312544 | orchestrator | 2025-09-18 11:04:20.312554 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ****************** 2025-09-18 11:04:20.312571 | orchestrator | Thursday 18 September 2025 11:01:08 +0000 (0:00:00.542) 0:05:19.305 **** 2025-09-18 11:04:20.312581 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:20.312590 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:20.312600 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:20.312610 | orchestrator | changed: [testbed-node-3] 2025-09-18 11:04:20.312619 | orchestrator | changed: [testbed-node-4] 2025-09-18 11:04:20.312629 | orchestrator | changed: [testbed-node-5] 2025-09-18 11:04:20.312639 | orchestrator | 2025-09-18 11:04:20.312648 | orchestrator | TASK [nova-cell : Copying over existing policy file] *************************** 2025-09-18 11:04:20.312658 | orchestrator | Thursday 18 September 2025 11:01:10 
+0000 (0:00:02.044) 0:05:21.349 **** 2025-09-18 11:04:20.312674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.312685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 11:04:20.312700 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-09-18 11:04:20.312711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-09-18 11:04:20.312722 | orchestrator | skipping: [testbed-node-3] 2025-09-18 11:04:20.312732 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-09-18 
11:04:20.312749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.312760 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.312775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-18 11:04:20.312785 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-18 11:04:20.312800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.312811 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.312821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.312838 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.312849 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.312859 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.312874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.312884 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.312892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.312904 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.312913 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.312921 | orchestrator |
2025-09-18 11:04:20.312929 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-09-18 11:04:20.312937 | orchestrator | Thursday 18 September 2025 11:01:11 +0000 (0:00:01.243) 0:05:22.592 ****
2025-09-18 11:04:20.312945 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-18 11:04:20.312954 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-18 11:04:20.312967 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.312975 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-18 11:04:20.312983 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-18 11:04:20.312991 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.313000 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-18 11:04:20.313007 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-18 11:04:20.313015 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.313023 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-18 11:04:20.313031 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-18 11:04:20.313039 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.313047 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-18 11:04:20.313055 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-18 11:04:20.313063 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.313071 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-18 11:04:20.313079 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-18 11:04:20.313087 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.313095 | orchestrator |
2025-09-18 11:04:20.313103 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-09-18 11:04:20.313111 | orchestrator | Thursday 18 September 2025 11:01:12 +0000 (0:00:00.911) 0:05:23.504 ****
2025-09-18 11:04:20.313120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-18 11:04:20.313133 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-18 11:04:20.313145 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-libvirt:2024.2', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-09-18 11:04:20.313159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.313168 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-18 11:04:20.313176 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.313185 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/nova-novncproxy:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-09-18 11:04:20.313198 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-18 11:04:20.313207 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-ssh:2024.2', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-09-18 11:04:20.313220 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313250 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/nova-conductor:2024.2', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313263 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313272 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/nova-compute:2024.2', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro',
'/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-09-18 11:04:20.313285 | orchestrator |
2025-09-18 11:04:20.313293 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-09-18 11:04:20.313301 | orchestrator | Thursday 18 September 2025 11:01:15 +0000 (0:00:02.817) 0:05:26.321 ****
2025-09-18 11:04:20.313309 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.313318 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.313340 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.313349 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.313357 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.313369 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.313377 | orchestrator |
2025-09-18 11:04:20.313385 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313393 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.852) 0:05:27.174 ****
2025-09-18 11:04:20.313401 | orchestrator |
2025-09-18 11:04:20.313409 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313417 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.130) 0:05:27.305 ****
2025-09-18 11:04:20.313425 | orchestrator |
2025-09-18 11:04:20.313432 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313440 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.165) 0:05:27.470 ****
2025-09-18 11:04:20.313448 | orchestrator |
2025-09-18 11:04:20.313456 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313464 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.158) 0:05:27.628 ****
2025-09-18 11:04:20.313472 | orchestrator |
2025-09-18 11:04:20.313480 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313488 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.129) 0:05:27.758 ****
2025-09-18 11:04:20.313496 | orchestrator |
2025-09-18 11:04:20.313504 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-09-18 11:04:20.313511 | orchestrator | Thursday 18 September 2025 11:01:16 +0000 (0:00:00.128) 0:05:27.887 ****
2025-09-18 11:04:20.313519 | orchestrator |
2025-09-18 11:04:20.313527 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-09-18 11:04:20.313535 | orchestrator | Thursday 18 September 2025 11:01:17 +0000 (0:00:00.325) 0:05:28.212 ****
2025-09-18 11:04:20.313543 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:04:20.313551 | orchestrator | changed: [testbed-node-2]
2025-09-18 11:04:20.313559 | orchestrator | changed: [testbed-node-1]
2025-09-18 11:04:20.313567 | orchestrator |
2025-09-18 11:04:20.313575 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-09-18 11:04:20.313583 | orchestrator | Thursday 18 September 2025 11:01:29 +0000 (0:00:12.287) 0:05:40.500 ****
2025-09-18 11:04:20.313591 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:04:20.313599 | orchestrator | changed: [testbed-node-2]
2025-09-18 11:04:20.313607 | orchestrator | changed: [testbed-node-1]
2025-09-18 11:04:20.313615 | orchestrator |
2025-09-18 11:04:20.313623 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-09-18 11:04:20.313631 | orchestrator | Thursday 18 September 2025 11:01:46 +0000 (0:00:17.028) 0:05:57.528 ****
2025-09-18 11:04:20.313639 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.313647 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.313655 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.313663 | orchestrator |
2025-09-18 11:04:20.313671 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-09-18 11:04:20.313679 | orchestrator | Thursday 18 September 2025 11:02:05 +0000 (0:00:19.227) 0:06:16.755 ****
2025-09-18 11:04:20.313687 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.313695 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.313703 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.313711 | orchestrator |
2025-09-18 11:04:20.313719 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-09-18 11:04:20.313732 | orchestrator | Thursday 18 September 2025 11:02:44 +0000 (0:00:38.570) 0:06:55.325 ****
2025-09-18 11:04:20.313740 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.313748 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.313756 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.313764 | orchestrator |
2025-09-18 11:04:20.313772 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-09-18 11:04:20.313780 | orchestrator | Thursday 18 September 2025 11:02:45 +0000 (0:00:00.838) 0:06:56.164 ****
2025-09-18 11:04:20.313788 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.313796 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.313804 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.313812 | orchestrator |
2025-09-18 11:04:20.313820 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-09-18 11:04:20.313832 | orchestrator | Thursday 18 September 2025 11:02:45 +0000 (0:00:00.796) 0:06:56.960 ****
2025-09-18 11:04:20.313840 | orchestrator | changed: [testbed-node-5]
2025-09-18 11:04:20.313849 | orchestrator | changed: [testbed-node-3]
2025-09-18 11:04:20.313857 | orchestrator | changed: [testbed-node-4]
2025-09-18 11:04:20.313865 | orchestrator |
2025-09-18 11:04:20.313873 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-09-18 11:04:20.313881 | orchestrator | Thursday 18 September 2025 11:03:11 +0000 (0:00:25.861) 0:07:22.822 ****
2025-09-18 11:04:20.313889 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.313897 | orchestrator |
2025-09-18 11:04:20.313905 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-09-18 11:04:20.313913 | orchestrator | Thursday 18 September 2025 11:03:11 +0000 (0:00:00.123) 0:07:22.945 ****
2025-09-18 11:04:20.313921 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.313929 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.313936 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.313944 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.313952 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.313960 | orchestrator | FAILED - RETRYING: [testbed-node-3 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-09-18 11:04:20.313968 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 11:04:20.313976 | orchestrator |
2025-09-18 11:04:20.313984 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-09-18 11:04:20.313992 | orchestrator | Thursday 18 September 2025 11:03:33 +0000 (0:00:21.514) 0:07:44.459 ****
2025-09-18 11:04:20.314000 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.314008 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.314037 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.314047 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.314055 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.314063 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.314071 | orchestrator |
2025-09-18 11:04:20.314086 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-09-18 11:04:20.314094 | orchestrator | Thursday 18 September 2025 11:03:42 +0000 (0:00:08.930) 0:07:53.390 ****
2025-09-18 11:04:20.314102 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.314110 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.314118 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.314126 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.314133 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.314141 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-3
2025-09-18 11:04:20.314150 | orchestrator |
2025-09-18 11:04:20.314158 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-09-18 11:04:20.314166 | orchestrator | Thursday 18 September 2025 11:03:46 +0000 (0:00:03.824) 0:07:57.215 ****
2025-09-18 11:04:20.314174 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 11:04:20.314187 | orchestrator |
2025-09-18 11:04:20.314195 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-09-18 11:04:20.314203 | orchestrator | Thursday 18 September 2025 11:03:58 +0000 (0:00:12.682) 0:08:09.898 ****
2025-09-18 11:04:20.314211 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 11:04:20.314219 | orchestrator |
2025-09-18 11:04:20.314227 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-09-18 11:04:20.314235 | orchestrator | Thursday 18 September 2025 11:04:00 +0000 (0:00:01.411) 0:08:11.310 ****
2025-09-18 11:04:20.314243 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.314251 | orchestrator |
2025-09-18 11:04:20.314259 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-09-18 11:04:20.314267 | orchestrator | Thursday 18 September 2025 11:04:01 +0000 (0:00:01.272) 0:08:12.582 ****
2025-09-18 11:04:20.314275 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-09-18 11:04:20.314283 | orchestrator |
2025-09-18 11:04:20.314290 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-09-18 11:04:20.314299 | orchestrator | Thursday 18 September 2025 11:04:12 +0000 (0:00:11.196) 0:08:23.779 ****
2025-09-18 11:04:20.314306 | orchestrator | ok: [testbed-node-3]
2025-09-18 11:04:20.314315 | orchestrator | ok: [testbed-node-4]
2025-09-18 11:04:20.314377 | orchestrator | ok: [testbed-node-5]
2025-09-18 11:04:20.314385 | orchestrator | ok: [testbed-node-0]
2025-09-18 11:04:20.314393 | orchestrator | ok: [testbed-node-1]
2025-09-18 11:04:20.314401 | orchestrator | ok: [testbed-node-2]
2025-09-18 11:04:20.314409 | orchestrator |
2025-09-18 11:04:20.314417 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-09-18 11:04:20.314425 | orchestrator |
2025-09-18 11:04:20.314433 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-09-18 11:04:20.314441 | orchestrator | Thursday 18 September 2025 11:04:14 +0000 (0:00:01.936) 0:08:25.715 ****
2025-09-18 11:04:20.314449 | orchestrator | changed: [testbed-node-0]
2025-09-18 11:04:20.314457 | orchestrator | changed: [testbed-node-1]
2025-09-18 11:04:20.314465 | orchestrator | changed: [testbed-node-2]
2025-09-18 11:04:20.314472 | orchestrator |
2025-09-18 11:04:20.314480 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-09-18 11:04:20.314488 | orchestrator |
2025-09-18 11:04:20.314496 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-09-18 11:04:20.314504 | orchestrator | Thursday 18 September 2025 11:04:16 +0000 (0:00:01.357) 0:08:27.072 ****
2025-09-18 11:04:20.314512 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.314520 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.314528 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.314536 | orchestrator |
2025-09-18 11:04:20.314544 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-09-18 11:04:20.314552 | orchestrator |
2025-09-18 11:04:20.314560 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-09-18 11:04:20.314568 | orchestrator | Thursday 18 September 2025 11:04:16 +0000 (0:00:00.519) 0:08:27.592 ****
2025-09-18 11:04:20.314576 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-09-18 11:04:20.314590 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-09-18 11:04:20.314599 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314607 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-09-18 11:04:20.314615 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-09-18 11:04:20.314623 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314631 | orchestrator | skipping: [testbed-node-3]
2025-09-18 11:04:20.314639 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-09-18 11:04:20.314647 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-09-18 11:04:20.314655 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314670 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-09-18 11:04:20.314678 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-09-18 11:04:20.314686 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314694 | orchestrator | skipping: [testbed-node-4]
2025-09-18 11:04:20.314702 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-09-18 11:04:20.314710 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-09-18 11:04:20.314718 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314726 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-09-18 11:04:20.314734 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-09-18 11:04:20.314742 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314750 | orchestrator | skipping: [testbed-node-5]
2025-09-18 11:04:20.314758 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-09-18 11:04:20.314765 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-09-18 11:04:20.314778 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314786 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-09-18 11:04:20.314794 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-09-18 11:04:20.314802 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314810 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.314818 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-09-18 11:04:20.314826 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-09-18 11:04:20.314834 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314842 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-09-18 11:04:20.314850 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-09-18 11:04:20.314858 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314866 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.314874 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-09-18 11:04:20.314882 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-09-18 11:04:20.314890 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-09-18 11:04:20.314898 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-09-18 11:04:20.314905 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-09-18 11:04:20.314913 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-09-18 11:04:20.314921 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.314929 | orchestrator |
2025-09-18 11:04:20.314937 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-09-18 11:04:20.314945 | orchestrator |
2025-09-18 11:04:20.314953 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-09-18 11:04:20.314961 | orchestrator | Thursday 18 September 2025 11:04:17 +0000 (0:00:01.388) 0:08:28.981 ****
2025-09-18 11:04:20.314969 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-09-18 11:04:20.314977 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-09-18 11:04:20.314985 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.314993 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-09-18 11:04:20.315001 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-09-18 11:04:20.315009 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.315017 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-09-18 11:04:20.315025 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-09-18 11:04:20.315033 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:20.315041 | orchestrator |
2025-09-18 11:04:20.315055 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-09-18 11:04:20.315063 | orchestrator |
2025-09-18 11:04:20.315071 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-09-18 11:04:20.315079 | orchestrator | Thursday 18 September 2025 11:04:18 +0000 (0:00:00.718) 0:08:29.699 ****
2025-09-18 11:04:20.315087 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.315095 | orchestrator |
2025-09-18 11:04:20.315103 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-09-18 11:04:20.315111 | orchestrator |
2025-09-18 11:04:20.315119 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-09-18 11:04:20.315127 | orchestrator | Thursday 18 September 2025 11:04:19 +0000 (0:00:00.665) 0:08:30.364 ****
2025-09-18 11:04:20.315135 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:20.315143 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:20.315150 | orchestrator | skipping: [testbed-node-2] 2025-09-18
11:04:20.315158 | orchestrator | 2025-09-18 11:04:20.315166 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 11:04:20.315175 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-09-18 11:04:20.315187 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0 2025-09-18 11:04:20.315196 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-18 11:04:20.315204 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0 2025-09-18 11:04:20.315212 | orchestrator | testbed-node-3 : ok=43  changed=27  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-09-18 11:04:20.315220 | orchestrator | testbed-node-4 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-18 11:04:20.315228 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0 2025-09-18 11:04:20.315236 | orchestrator | 2025-09-18 11:04:20.315244 | orchestrator | 2025-09-18 11:04:20.315252 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 11:04:20.315260 | orchestrator | Thursday 18 September 2025 11:04:19 +0000 (0:00:00.428) 0:08:30.792 **** 2025-09-18 11:04:20.315268 | orchestrator | =============================================================================== 2025-09-18 11:04:20.315276 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 38.57s 2025-09-18 11:04:20.315288 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 31.06s 2025-09-18 11:04:20.315296 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 25.86s 2025-09-18 11:04:20.315304 | orchestrator | nova : Restart nova-scheduler 
container -------------------------------- 23.68s 2025-09-18 11:04:20.315312 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 21.51s 2025-09-18 11:04:20.315352 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 21.36s 2025-09-18 11:04:20.315361 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 19.23s 2025-09-18 11:04:20.315369 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 17.82s 2025-09-18 11:04:20.315377 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 17.03s 2025-09-18 11:04:20.315385 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 14.50s 2025-09-18 11:04:20.315393 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.93s 2025-09-18 11:04:20.315401 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.88s 2025-09-18 11:04:20.315414 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.68s 2025-09-18 11:04:20.315422 | orchestrator | nova-cell : Create cell ------------------------------------------------ 12.42s 2025-09-18 11:04:20.315430 | orchestrator | nova-cell : Restart nova-conductor container --------------------------- 12.29s 2025-09-18 11:04:20.315438 | orchestrator | nova-cell : Discover nova hosts ---------------------------------------- 11.20s 2025-09-18 11:04:20.315446 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 8.93s 2025-09-18 11:04:20.315454 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.43s 2025-09-18 11:04:20.315462 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 7.82s 2025-09-18 11:04:20.315470 | orchestrator | service-ks-register : nova | Creating 
endpoints ------------------------- 6.81s 2025-09-18 11:04:20.315478 | orchestrator | 2025-09-18 11:04:20 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:20.315486 | orchestrator | 2025-09-18 11:04:20 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:23.342732 | orchestrator | 2025-09-18 11:04:23 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:23.342858 | orchestrator | 2025-09-18 11:04:23 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:26.390451 | orchestrator | 2025-09-18 11:04:26 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:26.390568 | orchestrator | 2025-09-18 11:04:26 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:29.440160 | orchestrator | 2025-09-18 11:04:29 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:29.440242 | orchestrator | 2025-09-18 11:04:29 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:32.478572 | orchestrator | 2025-09-18 11:04:32 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:32.478666 | orchestrator | 2025-09-18 11:04:32 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:35.518259 | orchestrator | 2025-09-18 11:04:35 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:35.518411 | orchestrator | 2025-09-18 11:04:35 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:38.564760 | orchestrator | 2025-09-18 11:04:38 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:38.565573 | orchestrator | 2025-09-18 11:04:38 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:41.615807 | orchestrator | 2025-09-18 11:04:41 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:41.615905 | orchestrator | 2025-09-18 11:04:41 | INFO  | Wait 1 
second(s) until the next check 2025-09-18 11:04:44.663394 | orchestrator | 2025-09-18 11:04:44 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:44.663494 | orchestrator | 2025-09-18 11:04:44 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:47.706774 | orchestrator | 2025-09-18 11:04:47 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state STARTED 2025-09-18 11:04:47.706877 | orchestrator | 2025-09-18 11:04:47 | INFO  | Wait 1 second(s) until the next check 2025-09-18 11:04:50.751996 | orchestrator | 2025-09-18 11:04:50 | INFO  | Task b79179f2-264e-496e-889a-a7f4f9f88966 is in state SUCCESS 2025-09-18 11:04:50.752273 | orchestrator | 2025-09-18 11:04:50 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:04:50.754689 | orchestrator | 2025-09-18 11:04:50.754770 | orchestrator | 2025-09-18 11:04:50.754783 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-09-18 11:04:50.754887 | orchestrator | 2025-09-18 11:04:50.754902 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-09-18 11:04:50.754913 | orchestrator | Thursday 18 September 2025 10:59:31 +0000 (0:00:00.259) 0:00:00.259 **** 2025-09-18 11:04:50.754937 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.754950 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:04:50.754961 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:04:50.754987 | orchestrator | 2025-09-18 11:04:50.755044 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-09-18 11:04:50.755057 | orchestrator | Thursday 18 September 2025 10:59:32 +0000 (0:00:00.329) 0:00:00.588 **** 2025-09-18 11:04:50.755114 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True) 2025-09-18 11:04:50.755128 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True) 2025-09-18 11:04:50.755167 | orchestrator | 
ok: [testbed-node-2] => (item=enable_octavia_True) 2025-09-18 11:04:50.755179 | orchestrator | 2025-09-18 11:04:50.755192 | orchestrator | PLAY [Apply role octavia] ****************************************************** 2025-09-18 11:04:50.755238 | orchestrator | 2025-09-18 11:04:50.755251 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 11:04:50.755264 | orchestrator | Thursday 18 September 2025 10:59:32 +0000 (0:00:00.477) 0:00:01.065 **** 2025-09-18 11:04:50.755289 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:50.755330 | orchestrator | 2025-09-18 11:04:50.755343 | orchestrator | TASK [service-ks-register : octavia | Creating services] *********************** 2025-09-18 11:04:50.755356 | orchestrator | Thursday 18 September 2025 10:59:33 +0000 (0:00:00.557) 0:00:01.623 **** 2025-09-18 11:04:50.755369 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer)) 2025-09-18 11:04:50.755381 | orchestrator | 2025-09-18 11:04:50.755393 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] ********************** 2025-09-18 11:04:50.755406 | orchestrator | Thursday 18 September 2025 10:59:36 +0000 (0:00:03.591) 0:00:05.214 **** 2025-09-18 11:04:50.755418 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal) 2025-09-18 11:04:50.755430 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public) 2025-09-18 11:04:50.755443 | orchestrator | 2025-09-18 11:04:50.755456 | orchestrator | TASK [service-ks-register : octavia | Creating projects] *********************** 2025-09-18 11:04:50.755468 | orchestrator | Thursday 18 September 2025 10:59:43 +0000 (0:00:06.594) 0:00:11.808 **** 2025-09-18 11:04:50.755481 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-09-18 
11:04:50.755494 | orchestrator | 2025-09-18 11:04:50.755507 | orchestrator | TASK [service-ks-register : octavia | Creating users] ************************** 2025-09-18 11:04:50.755519 | orchestrator | Thursday 18 September 2025 10:59:46 +0000 (0:00:03.381) 0:00:15.190 **** 2025-09-18 11:04:50.755532 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-09-18 11:04:50.755544 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-18 11:04:50.755557 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service) 2025-09-18 11:04:50.755569 | orchestrator | 2025-09-18 11:04:50.755581 | orchestrator | TASK [service-ks-register : octavia | Creating roles] ************************** 2025-09-18 11:04:50.755593 | orchestrator | Thursday 18 September 2025 10:59:55 +0000 (0:00:08.541) 0:00:23.731 **** 2025-09-18 11:04:50.755605 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-09-18 11:04:50.755617 | orchestrator | 2025-09-18 11:04:50.755630 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] ********************* 2025-09-18 11:04:50.755641 | orchestrator | Thursday 18 September 2025 10:59:58 +0000 (0:00:03.599) 0:00:27.331 **** 2025-09-18 11:04:50.755652 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-18 11:04:50.755663 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin) 2025-09-18 11:04:50.755674 | orchestrator | 2025-09-18 11:04:50.755685 | orchestrator | TASK [octavia : Adding octavia related roles] ********************************** 2025-09-18 11:04:50.755706 | orchestrator | Thursday 18 September 2025 11:00:07 +0000 (0:00:08.384) 0:00:35.716 **** 2025-09-18 11:04:50.755717 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer) 2025-09-18 11:04:50.755727 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer) 2025-09-18 11:04:50.755738 | orchestrator | changed: 
[testbed-node-0] => (item=load-balancer_member) 2025-09-18 11:04:50.755749 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin) 2025-09-18 11:04:50.755760 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin) 2025-09-18 11:04:50.755770 | orchestrator | 2025-09-18 11:04:50.755781 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 11:04:50.755792 | orchestrator | Thursday 18 September 2025 11:00:23 +0000 (0:00:16.185) 0:00:51.902 **** 2025-09-18 11:04:50.755802 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:50.755813 | orchestrator | 2025-09-18 11:04:50.755824 | orchestrator | TASK [octavia : Create amphora flavor] ***************************************** 2025-09-18 11:04:50.755835 | orchestrator | Thursday 18 September 2025 11:00:23 +0000 (0:00:00.603) 0:00:52.505 **** 2025-09-18 11:04:50.755846 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.755857 | orchestrator | 2025-09-18 11:04:50.755867 | orchestrator | TASK [octavia : Create nova keypair for amphora] ******************************* 2025-09-18 11:04:50.755878 | orchestrator | Thursday 18 September 2025 11:00:48 +0000 (0:00:24.771) 0:01:17.276 **** 2025-09-18 11:04:50.755890 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.755900 | orchestrator | 2025-09-18 11:04:50.755911 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-18 11:04:50.755937 | orchestrator | Thursday 18 September 2025 11:00:53 +0000 (0:00:04.718) 0:01:21.995 **** 2025-09-18 11:04:50.755948 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.755959 | orchestrator | 2025-09-18 11:04:50.755970 | orchestrator | TASK [octavia : Create security groups for octavia] **************************** 2025-09-18 11:04:50.755981 | orchestrator | Thursday 18 September 2025 11:00:56 
+0000 (0:00:03.253) 0:01:25.248 **** 2025-09-18 11:04:50.755999 | orchestrator | changed: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-18 11:04:50.756010 | orchestrator | changed: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-18 11:04:50.756021 | orchestrator | 2025-09-18 11:04:50.756032 | orchestrator | TASK [octavia : Add rules for security groups] ********************************* 2025-09-18 11:04:50.756043 | orchestrator | Thursday 18 September 2025 11:01:06 +0000 (0:00:09.987) 0:01:35.236 **** 2025-09-18 11:04:50.756054 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'icmp'}]) 2025-09-18 11:04:50.756065 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': 22, 'dst_port': 22}]) 2025-09-18 11:04:50.756078 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-mgmt-sec-grp', 'enabled': True}, {'protocol': 'tcp', 'src_port': '9443', 'dst_port': '9443'}]) 2025-09-18 11:04:50.756090 | orchestrator | changed: [testbed-node-0] => (item=[{'name': 'lb-health-mgr-sec-grp', 'enabled': True}, {'protocol': 'udp', 'src_port': '5555', 'dst_port': '5555'}]) 2025-09-18 11:04:50.756101 | orchestrator | 2025-09-18 11:04:50.756112 | orchestrator | TASK [octavia : Create loadbalancer management network] ************************ 2025-09-18 11:04:50.756123 | orchestrator | Thursday 18 September 2025 11:01:23 +0000 (0:00:16.781) 0:01:52.017 **** 2025-09-18 11:04:50.756133 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756144 | orchestrator | 2025-09-18 11:04:50.756155 | orchestrator | TASK [octavia : Create loadbalancer management subnet] ************************* 2025-09-18 11:04:50.756166 | orchestrator | Thursday 18 September 2025 11:01:28 +0000 (0:00:04.854) 0:01:56.872 **** 2025-09-18 11:04:50.756177 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756188 | orchestrator | 2025-09-18 
11:04:50.756199 | orchestrator | TASK [octavia : Create loadbalancer management router for IPv6] **************** 2025-09-18 11:04:50.756215 | orchestrator | Thursday 18 September 2025 11:01:34 +0000 (0:00:06.238) 0:02:03.111 **** 2025-09-18 11:04:50.756225 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:50.756236 | orchestrator | 2025-09-18 11:04:50.756247 | orchestrator | TASK [octavia : Update loadbalancer management subnet] ************************* 2025-09-18 11:04:50.756258 | orchestrator | Thursday 18 September 2025 11:01:34 +0000 (0:00:00.249) 0:02:03.360 **** 2025-09-18 11:04:50.756268 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756280 | orchestrator | 2025-09-18 11:04:50.756290 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 11:04:50.756321 | orchestrator | Thursday 18 September 2025 11:01:40 +0000 (0:00:06.147) 0:02:09.507 **** 2025-09-18 11:04:50.756332 | orchestrator | included: /ansible/roles/octavia/tasks/hm-interface.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:50.756343 | orchestrator | 2025-09-18 11:04:50.756354 | orchestrator | TASK [octavia : Create ports for Octavia health-manager nodes] ***************** 2025-09-18 11:04:50.756365 | orchestrator | Thursday 18 September 2025 11:01:41 +0000 (0:00:01.044) 0:02:10.552 **** 2025-09-18 11:04:50.756376 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756387 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756398 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756408 | orchestrator | 2025-09-18 11:04:50.756419 | orchestrator | TASK [octavia : Update Octavia health manager port host_id] ******************** 2025-09-18 11:04:50.756430 | orchestrator | Thursday 18 September 2025 11:01:48 +0000 (0:00:06.078) 0:02:16.630 **** 2025-09-18 11:04:50.756441 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756452 | orchestrator | changed: 
[testbed-node-2] 2025-09-18 11:04:50.756463 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756474 | orchestrator | 2025-09-18 11:04:50.756484 | orchestrator | TASK [octavia : Add Octavia port to openvswitch br-int] ************************ 2025-09-18 11:04:50.756495 | orchestrator | Thursday 18 September 2025 11:01:52 +0000 (0:00:04.649) 0:02:21.280 **** 2025-09-18 11:04:50.756506 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756517 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756528 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756539 | orchestrator | 2025-09-18 11:04:50.756550 | orchestrator | TASK [octavia : Install isc-dhcp-client package] ******************************* 2025-09-18 11:04:50.756560 | orchestrator | Thursday 18 September 2025 11:01:53 +0000 (0:00:00.829) 0:02:22.109 **** 2025-09-18 11:04:50.756571 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:04:50.756582 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.756593 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:04:50.756604 | orchestrator | 2025-09-18 11:04:50.756615 | orchestrator | TASK [octavia : Create octavia dhclient conf] ********************************** 2025-09-18 11:04:50.756626 | orchestrator | Thursday 18 September 2025 11:01:56 +0000 (0:00:03.404) 0:02:25.514 **** 2025-09-18 11:04:50.756636 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756647 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756658 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756669 | orchestrator | 2025-09-18 11:04:50.756680 | orchestrator | TASK [octavia : Create octavia-interface service] ****************************** 2025-09-18 11:04:50.756690 | orchestrator | Thursday 18 September 2025 11:01:58 +0000 (0:00:01.306) 0:02:26.820 **** 2025-09-18 11:04:50.756701 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756712 | orchestrator | changed: [testbed-node-1] 2025-09-18 
11:04:50.756723 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756734 | orchestrator | 2025-09-18 11:04:50.756745 | orchestrator | TASK [octavia : Restart octavia-interface.service if required] ***************** 2025-09-18 11:04:50.756756 | orchestrator | Thursday 18 September 2025 11:01:59 +0000 (0:00:01.224) 0:02:28.045 **** 2025-09-18 11:04:50.756767 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756778 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756789 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756806 | orchestrator | 2025-09-18 11:04:50.756825 | orchestrator | TASK [octavia : Enable and start octavia-interface.service] ******************** 2025-09-18 11:04:50.756837 | orchestrator | Thursday 18 September 2025 11:02:01 +0000 (0:00:02.090) 0:02:30.135 **** 2025-09-18 11:04:50.756848 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.756859 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.756870 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.756881 | orchestrator | 2025-09-18 11:04:50.756897 | orchestrator | TASK [octavia : Wait for interface ohm0 ip appear] ***************************** 2025-09-18 11:04:50.756908 | orchestrator | Thursday 18 September 2025 11:02:03 +0000 (0:00:01.641) 0:02:31.777 **** 2025-09-18 11:04:50.756919 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.756929 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:04:50.756940 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:04:50.756951 | orchestrator | 2025-09-18 11:04:50.756962 | orchestrator | TASK [octavia : Gather facts] ************************************************** 2025-09-18 11:04:50.756973 | orchestrator | Thursday 18 September 2025 11:02:04 +0000 (0:00:00.922) 0:02:32.700 **** 2025-09-18 11:04:50.756984 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:04:50.756994 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.757005 | orchestrator | ok: 
[testbed-node-1] 2025-09-18 11:04:50.757016 | orchestrator | 2025-09-18 11:04:50.757026 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 11:04:50.757037 | orchestrator | Thursday 18 September 2025 11:02:06 +0000 (0:00:02.780) 0:02:35.481 **** 2025-09-18 11:04:50.757048 | orchestrator | included: /ansible/roles/octavia/tasks/get_resources_info.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-09-18 11:04:50.757059 | orchestrator | 2025-09-18 11:04:50.757070 | orchestrator | TASK [octavia : Get amphora flavor info] *************************************** 2025-09-18 11:04:50.757081 | orchestrator | Thursday 18 September 2025 11:02:07 +0000 (0:00:00.488) 0:02:35.969 **** 2025-09-18 11:04:50.757092 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.757102 | orchestrator | 2025-09-18 11:04:50.757113 | orchestrator | TASK [octavia : Get service project id] **************************************** 2025-09-18 11:04:50.757124 | orchestrator | Thursday 18 September 2025 11:02:11 +0000 (0:00:04.434) 0:02:40.403 **** 2025-09-18 11:04:50.757134 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.757145 | orchestrator | 2025-09-18 11:04:50.757156 | orchestrator | TASK [octavia : Get security groups for octavia] ******************************* 2025-09-18 11:04:50.757167 | orchestrator | Thursday 18 September 2025 11:02:15 +0000 (0:00:03.352) 0:02:43.755 **** 2025-09-18 11:04:50.757178 | orchestrator | ok: [testbed-node-0] => (item=lb-mgmt-sec-grp) 2025-09-18 11:04:50.757189 | orchestrator | ok: [testbed-node-0] => (item=lb-health-mgr-sec-grp) 2025-09-18 11:04:50.757199 | orchestrator | 2025-09-18 11:04:50.757210 | orchestrator | TASK [octavia : Get loadbalancer management network] *************************** 2025-09-18 11:04:50.757221 | orchestrator | Thursday 18 September 2025 11:02:22 +0000 (0:00:07.019) 0:02:50.775 **** 2025-09-18 11:04:50.757232 | orchestrator | ok: [testbed-node-0] 2025-09-18 
11:04:50.757243 | orchestrator | 2025-09-18 11:04:50.757254 | orchestrator | TASK [octavia : Set octavia resources facts] *********************************** 2025-09-18 11:04:50.757265 | orchestrator | Thursday 18 September 2025 11:02:25 +0000 (0:00:03.735) 0:02:54.510 **** 2025-09-18 11:04:50.757276 | orchestrator | ok: [testbed-node-0] 2025-09-18 11:04:50.757287 | orchestrator | ok: [testbed-node-1] 2025-09-18 11:04:50.757339 | orchestrator | ok: [testbed-node-2] 2025-09-18 11:04:50.757352 | orchestrator | 2025-09-18 11:04:50.757363 | orchestrator | TASK [octavia : Ensuring config directories exist] ***************************** 2025-09-18 11:04:50.757374 | orchestrator | Thursday 18 September 2025 11:02:26 +0000 (0:00:00.341) 0:02:54.852 **** 2025-09-18 11:04:50.757389 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.757420 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.757447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.757460 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': 
['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757485 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757518 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757542 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757555 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757567 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757579 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757590 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.757609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.757620 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.757632 | orchestrator |
2025-09-18 11:04:50.757643 | orchestrator | TASK [octavia : Check if policies shall be overwritten] ************************
2025-09-18 11:04:50.757655 | orchestrator | Thursday 18 September 2025 11:02:28 +0000 (0:00:02.562) 0:02:57.415 ****
2025-09-18 11:04:50.757666 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:50.757677 | orchestrator |
2025-09-18 11:04:50.757694 | orchestrator | TASK [octavia : Set octavia policy file] ***************************************
2025-09-18 11:04:50.757705 | orchestrator | Thursday 18 September 2025 11:02:28 +0000 (0:00:00.140) 0:02:57.556 ****
2025-09-18 11:04:50.757716 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:50.757727 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:50.757738 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:50.757749 | orchestrator |
2025-09-18 11:04:50.757765 | orchestrator | TASK [octavia : Copying over existing policy file] *****************************
2025-09-18 11:04:50.757776 | orchestrator | Thursday 18 September 2025 11:02:29 +0000 (0:00:00.542) 0:02:58.098 ****
2025-09-18 11:04:50.757787 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.757799 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757817 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757829 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.757853 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:50.757877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.757890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757901 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757919 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.757931 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.757942 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:50.757954 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.757975 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.757992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758004 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.758084 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:50.758095 | orchestrator |
2025-09-18 11:04:50.758107 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-09-18 11:04:50.758118 | orchestrator | Thursday 18 September 2025 11:02:30 +0000 (0:00:00.766) 0:02:58.865 ****
2025-09-18 11:04:50.758129 | orchestrator | included: /ansible/roles/octavia/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-09-18 11:04:50.758140 | orchestrator |
2025-09-18 11:04:50.758151 | orchestrator | TASK [service-cert-copy : octavia | Copying over extra CA certificates] ********
2025-09-18 11:04:50.758162 | orchestrator | Thursday 18 September 2025 11:02:30 +0000 (0:00:00.586) 0:02:59.451 ****
2025-09-18 11:04:50.758174 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.758686 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.758727 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.758763 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.758778 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.758789 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.758801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758845 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758858 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758878 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758889 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.758901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.758913 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.758933 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.758946 | orchestrator |
2025-09-18 11:04:50.758959 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS certificate] ***
2025-09-18 11:04:50.758971 | orchestrator | Thursday 18 September 2025 11:02:36 +0000 (0:00:05.390) 0:03:04.842 ****
2025-09-18 11:04:50.758990 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.759008 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.759021 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759033 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759045 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.759057 | orchestrator | skipping: [testbed-node-0]
2025-09-18 11:04:50.759080 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.759093 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.759111 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759123 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759134 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.759146 | orchestrator | skipping: [testbed-node-1]
2025-09-18 11:04:50.759157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.759174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})
2025-09-18 11:04:50.759191 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759209 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759221
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.759234 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:50.759248 | orchestrator |
2025-09-18 11:04:50.759260 | orchestrator | TASK [service-cert-copy : octavia | Copying over backend internal TLS key] *****
2025-09-18 11:04:50.759273 | orchestrator | Thursday 18 September 2025 11:02:37 +0000 (0:00:00.935) 0:03:05.778 ****
2025-09-18 11:04:50.759287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})
2025-09-18 11:04:50.759324 | orchestrator | skipping: [testbed-node-0] => (item={'key':
'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 11:04:50.759338 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 11:04:50.759369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 11:04:50.759383 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 11:04:50.759396 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:50.759409 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 11:04:50.759422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 
11:04:50.759435 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-09-18 11:04:50.759448 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-09-18 11:04:50.759480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-09-18 11:04:50.759495 | orchestrator | skipping: [testbed-node-1] 2025-09-18 
11:04:50.759508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-09-18 11:04:50.759521 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-09-18 11:04:50.759534 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759547 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})
2025-09-18 11:04:50.759560 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.759584 | orchestrator | skipping: [testbed-node-2]
2025-09-18 11:04:50.759596 | orchestrator |
2025-09-18 11:04:50.759607 | orchestrator | TASK [octavia : Copying over config.json files for services] *******************
2025-09-18 11:04:50.759618 | orchestrator | Thursday 18 September 2025 11:02:38 +0000 (0:00:00.908) 0:03:06.686 ****
2025-09-18 11:04:50.759643 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image':
'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.759656 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.759668 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': 
['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.759679 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.759691 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.759709 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 
'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.759732 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759745 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759756 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759768 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759791 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759818 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.759846 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})
2025-09-18 11:04:50.759858 | orchestrator |
2025-09-18 11:04:50.759869 | orchestrator | TASK [octavia : Copying over octavia-wsgi.conf] ********************************
2025-09-18 11:04:50.759880 | orchestrator | Thursday 18 September 2025 11:02:43 +0000 (0:00:05.337) 0:03:12.024 ****
2025-09-18 11:04:50.759891 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-18 11:04:50.759903 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-18 11:04:50.759915 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/octavia/templates/octavia-wsgi.conf.j2)
2025-09-18 11:04:50.759926 | orchestrator |
2025-09-18 11:04:50.759937 | orchestrator | TASK [octavia : Copying over octavia.conf] *************************************
2025-09-18 11:04:50.759948 | orchestrator | Thursday 18 September 2025 11:02:45 +0000 (0:00:02.247) 0:03:14.272 ****
2025-09-18 11:04:50.759960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876',
'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.759979 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.760003 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.760015 | orchestrator 
| changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.760027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.760038 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.760050 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760067 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760079 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760113 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760125 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760136 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760148 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760165 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760176 | orchestrator | 2025-09-18 11:04:50.760188 | orchestrator | TASK [octavia : Copying over Octavia SSH key] ********************************** 2025-09-18 11:04:50.760199 | orchestrator | Thursday 18 September 2025 11:03:02 +0000 (0:00:16.333) 0:03:30.605 **** 2025-09-18 11:04:50.760210 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.760222 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.760233 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.760244 | orchestrator | 2025-09-18 11:04:50.760256 | orchestrator | TASK [octavia : Copying certificate files for octavia-worker] ****************** 2025-09-18 11:04:50.760267 
| orchestrator | Thursday 18 September 2025 11:03:03 +0000 (0:00:01.559) 0:03:32.165 **** 2025-09-18 11:04:50.760278 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760289 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760325 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760337 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760348 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760359 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760375 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760387 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760398 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760409 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760420 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760431 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760442 | orchestrator | 2025-09-18 11:04:50.760454 | orchestrator | TASK [octavia : Copying certificate files for octavia-housekeeping] ************ 2025-09-18 11:04:50.760465 | orchestrator | Thursday 18 September 2025 11:03:09 +0000 (0:00:05.466) 0:03:37.632 **** 2025-09-18 11:04:50.760476 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760487 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760498 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760509 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 
11:04:50.760520 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760531 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760542 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760553 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760564 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760582 | orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760593 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760604 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760615 | orchestrator | 2025-09-18 11:04:50.760626 | orchestrator | TASK [octavia : Copying certificate files for octavia-health-manager] ********** 2025-09-18 11:04:50.760637 | orchestrator | Thursday 18 September 2025 11:03:15 +0000 (0:00:06.472) 0:03:44.104 **** 2025-09-18 11:04:50.760649 | orchestrator | changed: [testbed-node-0] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760660 | orchestrator | changed: [testbed-node-2] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760671 | orchestrator | changed: [testbed-node-1] => (item=client.cert-and-key.pem) 2025-09-18 11:04:50.760682 | orchestrator | changed: [testbed-node-0] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760693 | orchestrator | changed: [testbed-node-2] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760704 | orchestrator | changed: [testbed-node-1] => (item=client_ca.cert.pem) 2025-09-18 11:04:50.760715 | orchestrator | changed: [testbed-node-0] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760726 | orchestrator | changed: [testbed-node-2] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760737 | orchestrator | changed: [testbed-node-1] => (item=server_ca.cert.pem) 2025-09-18 11:04:50.760748 | 
orchestrator | changed: [testbed-node-0] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760759 | orchestrator | changed: [testbed-node-2] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760770 | orchestrator | changed: [testbed-node-1] => (item=server_ca.key.pem) 2025-09-18 11:04:50.760781 | orchestrator | 2025-09-18 11:04:50.760792 | orchestrator | TASK [octavia : Check octavia containers] ************************************** 2025-09-18 11:04:50.760804 | orchestrator | Thursday 18 September 2025 11:03:21 +0000 (0:00:05.949) 0:03:50.054 **** 2025-09-18 11:04:50.760816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.760839 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.760852 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-api:2024.2', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-09-18 11:04:50.760870 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 
11:04:50.760882 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.760894 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-driver-agent:2024.2', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}}) 2025-09-18 11:04:50.760905 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760923 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 
'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760942 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-health-manager:2024.2', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760972 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': 
['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760983 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-housekeeping:2024.2', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}}) 2025-09-18 11:04:50.760995 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.761007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.761026 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/octavia-worker:2024.2', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}}) 2025-09-18 11:04:50.761037 | orchestrator | 2025-09-18 11:04:50.761049 | orchestrator | TASK [octavia : include_tasks] ************************************************* 2025-09-18 11:04:50.761065 | orchestrator | Thursday 18 September 2025 11:03:25 +0000 (0:00:03.942) 0:03:53.996 **** 2025-09-18 11:04:50.761083 | orchestrator | skipping: [testbed-node-0] 2025-09-18 11:04:50.761095 | orchestrator | skipping: [testbed-node-1] 2025-09-18 11:04:50.761106 | orchestrator | skipping: [testbed-node-2] 2025-09-18 11:04:50.761117 | orchestrator | 2025-09-18 11:04:50.761128 | orchestrator | TASK [octavia : Creating Octavia database] ************************************* 2025-09-18 11:04:50.761139 | orchestrator | Thursday 18 September 2025 11:03:25 +0000 (0:00:00.324) 0:03:54.321 **** 2025-09-18 11:04:50.761150 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761161 | orchestrator | 2025-09-18 11:04:50.761172 | orchestrator | TASK [octavia : Creating Octavia persistence database] ************************* 2025-09-18 11:04:50.761183 | orchestrator | Thursday 18 September 2025 11:03:27 +0000 (0:00:02.136) 0:03:56.458 **** 
2025-09-18 11:04:50.761194 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761205 | orchestrator | 2025-09-18 11:04:50.761217 | orchestrator | TASK [octavia : Creating Octavia database user and setting permissions] ******** 2025-09-18 11:04:50.761228 | orchestrator | Thursday 18 September 2025 11:03:30 +0000 (0:00:02.257) 0:03:58.716 **** 2025-09-18 11:04:50.761239 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761250 | orchestrator | 2025-09-18 11:04:50.761260 | orchestrator | TASK [octavia : Creating Octavia persistence database user and setting permissions] *** 2025-09-18 11:04:50.761272 | orchestrator | Thursday 18 September 2025 11:03:32 +0000 (0:00:02.438) 0:04:01.154 **** 2025-09-18 11:04:50.761283 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761294 | orchestrator | 2025-09-18 11:04:50.761350 | orchestrator | TASK [octavia : Running Octavia bootstrap container] *************************** 2025-09-18 11:04:50.761362 | orchestrator | Thursday 18 September 2025 11:03:34 +0000 (0:00:02.307) 0:04:03.462 **** 2025-09-18 11:04:50.761373 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761383 | orchestrator | 2025-09-18 11:04:50.761394 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-18 11:04:50.761405 | orchestrator | Thursday 18 September 2025 11:03:56 +0000 (0:00:21.935) 0:04:25.397 **** 2025-09-18 11:04:50.761416 | orchestrator | 2025-09-18 11:04:50.761427 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-18 11:04:50.761438 | orchestrator | Thursday 18 September 2025 11:03:56 +0000 (0:00:00.077) 0:04:25.474 **** 2025-09-18 11:04:50.761449 | orchestrator | 2025-09-18 11:04:50.761460 | orchestrator | TASK [octavia : Flush handlers] ************************************************ 2025-09-18 11:04:50.761471 | orchestrator | Thursday 18 September 2025 11:03:56 +0000 (0:00:00.073) 0:04:25.548 
**** 2025-09-18 11:04:50.761482 | orchestrator | 2025-09-18 11:04:50.761493 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-api container] ********************** 2025-09-18 11:04:50.761504 | orchestrator | Thursday 18 September 2025 11:03:57 +0000 (0:00:00.063) 0:04:25.611 **** 2025-09-18 11:04:50.761515 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761526 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.761537 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.761549 | orchestrator | 2025-09-18 11:04:50.761560 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-driver-agent container] ************* 2025-09-18 11:04:50.761571 | orchestrator | Thursday 18 September 2025 11:04:14 +0000 (0:00:17.055) 0:04:42.667 **** 2025-09-18 11:04:50.761582 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.761593 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.761604 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761616 | orchestrator | 2025-09-18 11:04:50.761627 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-health-manager container] *********** 2025-09-18 11:04:50.761638 | orchestrator | Thursday 18 September 2025 11:04:22 +0000 (0:00:08.305) 0:04:50.972 **** 2025-09-18 11:04:50.761649 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761660 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.761671 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.761682 | orchestrator | 2025-09-18 11:04:50.761693 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-housekeeping container] ************* 2025-09-18 11:04:50.761711 | orchestrator | Thursday 18 September 2025 11:04:32 +0000 (0:00:10.543) 0:05:01.515 **** 2025-09-18 11:04:50.761723 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761733 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.761745 | orchestrator | changed: [testbed-node-2] 2025-09-18 
11:04:50.761756 | orchestrator | 2025-09-18 11:04:50.761767 | orchestrator | RUNNING HANDLER [octavia : Restart octavia-worker container] ******************* 2025-09-18 11:04:50.761778 | orchestrator | Thursday 18 September 2025 11:04:38 +0000 (0:00:05.538) 0:05:07.054 **** 2025-09-18 11:04:50.761789 | orchestrator | changed: [testbed-node-0] 2025-09-18 11:04:50.761800 | orchestrator | changed: [testbed-node-2] 2025-09-18 11:04:50.761811 | orchestrator | changed: [testbed-node-1] 2025-09-18 11:04:50.761822 | orchestrator | 2025-09-18 11:04:50.761833 | orchestrator | PLAY RECAP ********************************************************************* 2025-09-18 11:04:50.761845 | orchestrator | testbed-node-0 : ok=57  changed=39  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-09-18 11:04:50.761856 | orchestrator | testbed-node-1 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 11:04:50.761868 | orchestrator | testbed-node-2 : ok=33  changed=22  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-09-18 11:04:50.761879 | orchestrator | 2025-09-18 11:04:50.761890 | orchestrator | 2025-09-18 11:04:50.761901 | orchestrator | TASKS RECAP ******************************************************************** 2025-09-18 11:04:50.761912 | orchestrator | Thursday 18 September 2025 11:04:49 +0000 (0:00:10.615) 0:05:17.669 **** 2025-09-18 11:04:50.761964 | orchestrator | =============================================================================== 2025-09-18 11:04:50.761977 | orchestrator | octavia : Create amphora flavor ---------------------------------------- 24.77s 2025-09-18 11:04:50.761988 | orchestrator | octavia : Running Octavia bootstrap container -------------------------- 21.94s 2025-09-18 11:04:50.762005 | orchestrator | octavia : Restart octavia-api container -------------------------------- 17.06s 2025-09-18 11:04:50.762060 | orchestrator | octavia : Add rules for security groups -------------------------------- 
16.78s 2025-09-18 11:04:50.762075 | orchestrator | octavia : Copying over octavia.conf ------------------------------------ 16.33s 2025-09-18 11:04:50.762086 | orchestrator | octavia : Adding octavia related roles --------------------------------- 16.19s 2025-09-18 11:04:50.762097 | orchestrator | octavia : Restart octavia-worker container ----------------------------- 10.62s 2025-09-18 11:04:50.762108 | orchestrator | octavia : Restart octavia-health-manager container --------------------- 10.54s 2025-09-18 11:04:50.762120 | orchestrator | octavia : Create security groups for octavia ---------------------------- 9.99s 2025-09-18 11:04:50.762131 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.54s 2025-09-18 11:04:50.762142 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 8.38s 2025-09-18 11:04:50.762153 | orchestrator | octavia : Restart octavia-driver-agent container ------------------------ 8.31s 2025-09-18 11:04:50.762164 | orchestrator | octavia : Get security groups for octavia ------------------------------- 7.02s 2025-09-18 11:04:50.762176 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.59s 2025-09-18 11:04:50.762187 | orchestrator | octavia : Copying certificate files for octavia-housekeeping ------------ 6.47s 2025-09-18 11:04:50.762198 | orchestrator | octavia : Create loadbalancer management subnet ------------------------- 6.24s 2025-09-18 11:04:50.762209 | orchestrator | octavia : Update loadbalancer management subnet ------------------------- 6.15s 2025-09-18 11:04:50.762220 | orchestrator | octavia : Create ports for Octavia health-manager nodes ----------------- 6.08s 2025-09-18 11:04:50.762231 | orchestrator | octavia : Copying certificate files for octavia-health-manager ---------- 5.95s 2025-09-18 11:04:50.762243 | orchestrator | octavia : Restart octavia-housekeeping container ------------------------ 5.54s 
2025-09-18 11:04:53.797136 | orchestrator | 2025-09-18 11:04:53 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:04:56.840787 | orchestrator | 2025-09-18 11:04:56 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:04:59.885623 | orchestrator | 2025-09-18 11:04:59 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:02.923137 | orchestrator | 2025-09-18 11:05:02 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:05.970682 | orchestrator | 2025-09-18 11:05:05 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:09.013941 | orchestrator | 2025-09-18 11:05:09 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:12.048723 | orchestrator | 2025-09-18 11:05:12 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:15.093057 | orchestrator | 2025-09-18 11:05:15 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:18.132178 | orchestrator | 2025-09-18 11:05:18 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:21.180549 | orchestrator | 2025-09-18 11:05:21 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:24.218583 | orchestrator | 2025-09-18 11:05:24 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:27.263691 | orchestrator | 2025-09-18 11:05:27 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:30.310269 | orchestrator | 2025-09-18 11:05:30 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:33.353861 | orchestrator | 2025-09-18 11:05:33 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:36.391610 | orchestrator | 2025-09-18 11:05:36 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:39.434384 | orchestrator | 2025-09-18 11:05:39 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:42.478250 | orchestrator | 
2025-09-18 11:05:42 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:45.516794 | orchestrator | 2025-09-18 11:05:45 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:48.556744 | orchestrator | 2025-09-18 11:05:48 | INFO  | Wait 1 second(s) until refresh of running tasks 2025-09-18 11:05:51.598764 | orchestrator | 2025-09-18 11:05:51.940899 | orchestrator | 2025-09-18 11:05:51.945606 | orchestrator | --> DEPLOY IN A NUTSHELL -- END -- Thu Sep 18 11:05:51 UTC 2025 2025-09-18 11:05:51.945634 | orchestrator | 2025-09-18 11:05:52.326049 | orchestrator | ok: Runtime: 0:35:47.356377 2025-09-18 11:05:52.596073 | 2025-09-18 11:05:52.596216 | TASK [Bootstrap services] 2025-09-18 11:05:53.343990 | orchestrator | 2025-09-18 11:05:53.344179 | orchestrator | # BOOTSTRAP 2025-09-18 11:05:53.344202 | orchestrator | 2025-09-18 11:05:53.344217 | orchestrator | + set -e 2025-09-18 11:05:53.344230 | orchestrator | + echo 2025-09-18 11:05:53.344244 | orchestrator | + echo '# BOOTSTRAP' 2025-09-18 11:05:53.344262 | orchestrator | + echo 2025-09-18 11:05:53.344345 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap-services.sh 2025-09-18 11:05:53.353731 | orchestrator | + set -e 2025-09-18 11:05:53.354421 | orchestrator | + sh -c /opt/configuration/scripts/bootstrap/300-openstack.sh 2025-09-18 11:05:55.630380 | orchestrator | 2025-09-18 11:05:55 | INFO  | It takes a moment until task 67c3058c-e189-4f74-a5f9-263c08242faa (flavor-manager) has been started and output is visible here. 
2025-09-18 11:05:58.943891 | orchestrator | Traceback (most recent call last)
2025-09-18 11:05:58.943999 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:194 in run
2025-09-18 11:05:58.944045 | orchestrator |     191 │ logger.add(sys.stderr, format=log_fmt, level=level, colorize=True)
2025-09-18 11:05:58.944069 | orchestrator |     192 │
2025-09-18 11:05:58.944081 | orchestrator |     193 │ definitions = get_flavor_definitions(name, url)
2025-09-18 11:05:58.944093 | orchestrator |   ❱ 194 │ manager = FlavorManager(
2025-09-18 11:05:58.944104 | orchestrator |     195 │ │ cloud=Cloud(cloud),
2025-09-18 11:05:58.944115 | orchestrator |     196 │ │ definitions=definitions,
2025-09-18 11:05:58.944126 | orchestrator |     197 │ │ recommended=recommended,
2025-09-18 11:05:58.944149 | orchestrator | locals:
    cloud = 'admin'
    debug = False
    definitions = {
        'reference': [
            {'field': 'name', 'mandatory_prefix': 'SCS-'},
            {'field': 'cpus'},
            {'field': 'ram'},
            {'field': 'disk'},
            {'field': 'public', 'default': True},
            {'field': 'disabled', 'default': False}
        ],
        'mandatory': [
            {'name': 'SCS-1L-1', 'cpus': 1, 'ram': 1024, 'disk': 0, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:1', 'scs:name-v2': 'SCS-1L-1', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-2', 'cpus': 1, 'ram': 2048, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2', 'scs:name-v2': 'SCS-1V-2', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-2-5', 'cpus': 1, 'ram': 2048, 'disk': 5, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5', 'scs:name-v2': 'SCS-1V-2-5', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-4', 'cpus': 1, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4', 'scs:name-v2': 'SCS-1V-4', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-8', 'cpus': 1, 'ram': 8192, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8', 'scs:name-v2': 'SCS-1V-8', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-2V-4', 'cpus': 2, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4', 'scs:name-v2': 'SCS-2V-4', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true'},
            ... +19
        ]
    }
    level = 'INFO'
    limit_memory = 32
    log_fmt = '{time:YYYY-MM-DD HH:mm:ss} | {level: <8} | '+17
    name = 'local'
    recommended = True
    url = None
2025-09-18 11:05:58.973596 | orchestrator | /usr/local/lib/python3.13/site-packages/openstack_flavor_manager/main.py:101 in __init__
2025-09-18 11:05:58.973628 | orchestrator |      98 │ │ self.required_flavors = definitions["mandatory"]
2025-09-18 11:05:58.973638 | orchestrator |      99 │ │ self.cloud = cloud
2025-09-18 11:05:58.973649 | orchestrator |     100 │ │ if recommended:
2025-09-18 11:05:58.973659 | orchestrator |   ❱ 101 │ │ │ recommended_flavors = definitions["recommended"]
2025-09-18 11:05:58.973670 | orchestrator |     102 │ │ │ # Filter recommended flavors based on memory limit
2025-09-18 11:05:58.973681 | orchestrator |     103 │ │ │ limit_memory_mb = limit_memory * 1024
2025-09-18 11:05:58.973691 | orchestrator |     104 │ │ │ filtered_recommended = [
2025-09-18 11:05:58.973718 | orchestrator | locals:
    cloud =
    definitions = {
        'reference': [
            {'field': 'name', 'mandatory_prefix': 'SCS-'},
            {'field': 'cpus'},
            {'field': 'ram'},
            {'field': 'disk'},
            {'field': 'public', 'default': True},
            {'field': 'disabled', 'default': False}
        ],
        'mandatory': [
            {'name': 'SCS-1L-1', 'cpus': 1, 'ram': 1024, 'disk': 0, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:1', 'scs:name-v2': 'SCS-1L-1', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1L-1-5', 'cpus': 1, 'ram': 1024, 'disk': 5, 'scs:cpu-type': 'crowded-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1L:5', 'scs:name-v2': 'SCS-1L-5', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-2', 'cpus': 1, 'ram': 2048, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2', 'scs:name-v2': 'SCS-1V-2', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-2-5', 'cpus': 1, 'ram': 2048, 'disk': 5, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:2:5', 'scs:name-v2': 'SCS-1V-2-5', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-4', 'cpus': 1, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4', 'scs:name-v2': 'SCS-1V-4', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-4-10', 'cpus': 1, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:4:10', 'scs:name-v2': 'SCS-1V-4-10', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-8', 'cpus': 1, 'ram': 8192, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8', 'scs:name-v2': 'SCS-1V-8', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-1V-8-20', 'cpus': 1, 'ram': 8192, 'disk': 20, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-1V:8:20', 'scs:name-v2': 'SCS-1V-8-20', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-2V-4', 'cpus': 2, 'ram': 4096, 'disk': 0, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4', 'scs:name-v2': 'SCS-2V-4', 'hw_rng:allowed': 'true'},
            {'name': 'SCS-2V-4-10', 'cpus': 2, 'ram': 4096, 'disk': 10, 'scs:cpu-type': 'shared-core', 'scs:disk0-type': 'network', 'scs:name-v1': 'SCS-2V:4:10', 'scs:name-v2': 'SCS-2V-4-10', 'hw_rng:allowed': 'true'},
            ... +19
        ]
    }
    limit_memory = 32
    recommended = True
    self =
2025-09-18 11:05:59.050815 | orchestrator | KeyError: 'recommended'
2025-09-18 11:05:59.637440 | orchestrator | ERROR
2025-09-18 11:05:59.637819 | orchestrator | {
2025-09-18 11:05:59.637886 | orchestrator |   "delta": "0:00:06.449842",
2025-09-18 11:05:59.637952 | orchestrator |   "end": "2025-09-18 11:05:59.384661",
2025-09-18 11:05:59.637987 | orchestrator |   "msg": "non-zero return code",
2025-09-18 11:05:59.638017 | orchestrator |   "rc": 1,
2025-09-18 11:05:59.638048 | orchestrator |   "start": "2025-09-18 11:05:52.934819"
2025-09-18 11:05:59.638077 | orchestrator | } failure
2025-09-18 11:05:59.651327 | PLAY RECAP
2025-09-18 11:05:59.651390 | orchestrator | ok: 22 changed: 9 unreachable: 0 failed: 1 skipped: 3 rescued: 0 ignored: 0
2025-09-18 11:05:59.880210 | RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main]
2025-09-18 11:05:59.882147 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-18 11:06:00.653219 | PLAY [Post output play]
2025-09-18 11:06:00.670952 | LOOP [stage-output : Register sources]
2025-09-18 11:06:00.736176 | TASK [stage-output : Check sudo]
2025-09-18 11:06:01.542979 | orchestrator | sudo: a password is required
2025-09-18 11:06:01.771783 | orchestrator | ok: Runtime:
0:00:00.008708
2025-09-18 11:06:01.779782 | LOOP [stage-output : Set source and destination for files and folders]
2025-09-18 11:06:01.813267 | TASK [stage-output : Build a list of source, dest dictionaries]
2025-09-18 11:06:01.894720 | orchestrator | ok
2025-09-18 11:06:01.905721 | LOOP [stage-output : Ensure target folders exist]
2025-09-18 11:06:02.308646 | orchestrator | ok: "docs"
2025-09-18 11:06:02.519361 | orchestrator | ok: "artifacts"
2025-09-18 11:06:02.705011 | orchestrator | ok: "logs"
2025-09-18 11:06:02.719923 | LOOP [stage-output : Copy files and folders to staging folder]
2025-09-18 11:06:02.759672 | TASK [stage-output : Make all log files readable]
2025-09-18 11:06:03.009809 | orchestrator | ok
2025-09-18 11:06:03.018746 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-09-18 11:06:03.043546 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:03.056563 | TASK [stage-output : Discover log files for compression]
2025-09-18 11:06:03.080408 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:03.093654 | LOOP [stage-output : Archive everything from logs]
2025-09-18 11:06:03.137527 | PLAY [Post cleanup play]
2025-09-18 11:06:03.145807 | TASK [Set cloud fact (Zuul deployment)]
2025-09-18 11:06:03.199557 | orchestrator | ok
2025-09-18 11:06:03.210411 | TASK [Set cloud fact (local deployment)]
2025-09-18 11:06:03.243778 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:03.259618 | TASK [Clean the cloud environment]
2025-09-18 11:06:03.770668 | orchestrator | 2025-09-18 11:06:03 - clean up servers
2025-09-18 11:06:04.753044 | orchestrator | 2025-09-18 11:06:04 - testbed-manager
2025-09-18 11:06:04.835863 | orchestrator | 2025-09-18 11:06:04 - testbed-node-3
2025-09-18 11:06:04.922097 | orchestrator | 2025-09-18 11:06:04 - testbed-node-4
2025-09-18 11:06:05.009235 | orchestrator | 2025-09-18 11:06:05 - testbed-node-2
2025-09-18 11:06:05.096963 | orchestrator | 2025-09-18 11:06:05 - testbed-node-0
2025-09-18 11:06:05.185952 | orchestrator | 2025-09-18 11:06:05 - testbed-node-1
2025-09-18 11:06:05.275795 | orchestrator | 2025-09-18 11:06:05 - testbed-node-5
2025-09-18 11:06:05.362795 | orchestrator | 2025-09-18 11:06:05 - clean up keypairs
2025-09-18 11:06:05.382795 | orchestrator | 2025-09-18 11:06:05 - testbed
2025-09-18 11:06:05.405651 | orchestrator | 2025-09-18 11:06:05 - wait for servers to be gone
2025-09-18 11:06:16.242694 | orchestrator | 2025-09-18 11:06:16 - clean up ports
2025-09-18 11:06:16.438375 | orchestrator | 2025-09-18 11:06:16 - 04d127dd-2daa-49e8-b093-89d96b7090ff
2025-09-18 11:06:16.675266 | orchestrator | 2025-09-18 11:06:16 - 0668e52b-b54b-4a4c-b368-f6505d9c05ef
2025-09-18 11:06:16.911732 | orchestrator | 2025-09-18 11:06:16 - 0d33c808-9866-4995-9032-37edcb97dcaf
2025-09-18 11:06:17.111944 | orchestrator | 2025-09-18 11:06:17 - 3085e031-2f3a-4f84-bee2-942544044320
2025-09-18 11:06:17.312631 | orchestrator | 2025-09-18 11:06:17 - 45bc695c-dc46-40d4-9dce-1fab25bbfcc3
2025-09-18 11:06:17.702879 | orchestrator | 2025-09-18 11:06:17 - 508c660a-9ddc-4c86-afbe-91a86d140aa0
2025-09-18 11:06:17.958139 | orchestrator | 2025-09-18 11:06:17 - 7a0c6963-d022-4f68-b3d7-3bbb99eebdd6
2025-09-18 11:06:18.177139 | orchestrator | 2025-09-18 11:06:18 - clean up volumes
2025-09-18 11:06:18.282962 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-manager-base
2025-09-18 11:06:18.319368 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-4-node-base
2025-09-18 11:06:18.358512 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-3-node-base
2025-09-18 11:06:18.399473 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-0-node-base
2025-09-18 11:06:18.438626 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-1-node-base
2025-09-18 11:06:18.476666 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-2-node-base
2025-09-18 11:06:18.515790 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-7-node-4
2025-09-18 11:06:18.558564 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-6-node-3
2025-09-18 11:06:18.598501 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-5-node-base
2025-09-18 11:06:18.638759 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-2-node-5
2025-09-18 11:06:18.676821 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-3-node-3
2025-09-18 11:06:18.716849 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-5-node-5
2025-09-18 11:06:18.755090 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-0-node-3
2025-09-18 11:06:18.794773 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-8-node-5
2025-09-18 11:06:18.836203 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-4-node-4
2025-09-18 11:06:18.874829 | orchestrator | 2025-09-18 11:06:18 - testbed-volume-1-node-4
2025-09-18 11:06:18.918543 | orchestrator | 2025-09-18 11:06:18 - disconnect routers
2025-09-18 11:06:19.056245 | orchestrator | 2025-09-18 11:06:19 - testbed
2025-09-18 11:06:20.018752 | orchestrator | 2025-09-18 11:06:20 - clean up subnets
2025-09-18 11:06:20.580881 | orchestrator | 2025-09-18 11:06:20 - subnet-testbed-management
2025-09-18 11:06:20.754641 | orchestrator | 2025-09-18 11:06:20 - clean up networks
2025-09-18 11:06:20.926462 | orchestrator | 2025-09-18 11:06:20 - net-testbed-management
2025-09-18 11:06:21.224769 | orchestrator | 2025-09-18 11:06:21 - clean up security groups
2025-09-18 11:06:21.268856 | orchestrator | 2025-09-18 11:06:21 - testbed-node
2025-09-18 11:06:21.377159 | orchestrator | 2025-09-18 11:06:21 - testbed-management
2025-09-18 11:06:21.504889 | orchestrator | 2025-09-18 11:06:21 - clean up floating ips
2025-09-18 11:06:21.539100 | orchestrator | 2025-09-18 11:06:21 - 81.163.192.190
2025-09-18 11:06:22.328487 | orchestrator | 2025-09-18 11:06:22 - clean up routers
2025-09-18 11:06:22.433945 | orchestrator | 2025-09-18 11:06:22 - testbed
2025-09-18 11:06:23.321256 | orchestrator | ok: Runtime: 0:00:19.732446
2025-09-18 11:06:23.323856 | PLAY RECAP
2025-09-18 11:06:23.323934 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-09-18 11:06:23.450173 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-09-18 11:06:23.452508 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-18 11:06:24.175420 | PLAY [Cleanup play]
2025-09-18 11:06:24.191191 | TASK [Set cloud fact (Zuul deployment)]
2025-09-18 11:06:24.248885 | orchestrator | ok
2025-09-18 11:06:24.259187 | TASK [Set cloud fact (local deployment)]
2025-09-18 11:06:24.293932 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:24.311490 | TASK [Clean the cloud environment]
2025-09-18 11:06:25.411050 | orchestrator | 2025-09-18 11:06:25 - clean up servers
2025-09-18 11:06:25.874736 | orchestrator | 2025-09-18 11:06:25 - clean up keypairs
2025-09-18 11:06:25.889743 | orchestrator | 2025-09-18 11:06:25 - wait for servers to be gone
2025-09-18 11:06:25.930544 | orchestrator | 2025-09-18 11:06:25 - clean up ports
2025-09-18 11:06:26.011153 | orchestrator | 2025-09-18 11:06:26 - clean up volumes
2025-09-18 11:06:26.070217 | orchestrator | 2025-09-18 11:06:26 - disconnect routers
2025-09-18 11:06:26.090495 | orchestrator | 2025-09-18 11:06:26 - clean up subnets
2025-09-18 11:06:26.111433 | orchestrator | 2025-09-18 11:06:26 - clean up networks
2025-09-18 11:06:26.261326 | orchestrator | 2025-09-18 11:06:26 - clean up security groups
2025-09-18 11:06:26.294571 | orchestrator | 2025-09-18 11:06:26 - clean up floating ips
2025-09-18 11:06:26.318489 | orchestrator | 2025-09-18 11:06:26 - clean up routers
2025-09-18 11:06:26.859632 | orchestrator | ok: Runtime: 0:00:01.268050
2025-09-18 11:06:26.863864 | PLAY RECAP
2025-09-18 11:06:26.863986 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-09-18 11:06:26.982815 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-09-18 11:06:26.983830 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-18 11:06:27.694501 | PLAY [Base post-fetch]
2025-09-18 11:06:27.709471 | TASK [fetch-output : Set log path for multiple nodes]
2025-09-18 11:06:27.764981 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:27.778993 | TASK [fetch-output : Set log path for single node]
2025-09-18 11:06:27.826360 | orchestrator | ok
2025-09-18 11:06:27.835569 | LOOP [fetch-output : Ensure local output dirs]
2025-09-18 11:06:28.312871 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/logs"
2025-09-18 11:06:28.583224 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/artifacts"
2025-09-18 11:06:28.848887 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/4abb468dbef14e5b8b9021c6a1c4ab57/work/docs"
2025-09-18 11:06:28.864159 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-09-18 11:06:29.769355 | orchestrator | changed: .d..t...... ./
2025-09-18 11:06:29.769653 | orchestrator | changed: All items complete
2025-09-18 11:06:30.497641 | orchestrator | changed: .d..t...... ./
2025-09-18 11:06:31.204784 | orchestrator | changed: .d..t...... ./
2025-09-18 11:06:31.229854 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-09-18 11:06:31.266808 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:31.269888 | orchestrator | skipping: Conditional result was False
2025-09-18 11:06:31.293066 | PLAY RECAP
2025-09-18 11:06:31.293150 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-09-18 11:06:31.407744 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-09-18 11:06:31.408713 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-18 11:06:32.115917 | PLAY [Base post]
2025-09-18 11:06:32.130466 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-09-18 11:06:33.305073 | orchestrator | changed
2025-09-18 11:06:33.314720 | PLAY RECAP
2025-09-18 11:06:33.314800 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-09-18 11:06:33.425137 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-09-18 11:06:33.427785 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-09-18 11:06:34.219476 | PLAY [Base post-logs]
2025-09-18 11:06:34.230980 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-09-18 11:06:34.683002 | localhost | changed
2025-09-18 11:06:34.696532 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-09-18 11:06:34.734821 | localhost | ok
2025-09-18 11:06:34.741192 | TASK [Set zuul-log-path fact]
2025-09-18 11:06:34.769822 | localhost | ok
2025-09-18 11:06:34.786771 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-09-18 11:06:34.824508 | localhost | ok
2025-09-18 11:06:34.831892 | TASK [upload-logs : Create log directories]
2025-09-18 11:06:35.321384 | localhost | changed
2025-09-18 11:06:35.327091 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-09-18 11:06:35.809174 | localhost -> localhost | ok: Runtime: 0:00:00.006918
2025-09-18 11:06:35.818734 | TASK [upload-logs : Upload logs to log server]
2025-09-18 11:06:36.374979 | localhost | Output suppressed because no_log was given
2025-09-18 11:06:36.379386 | LOOP [upload-logs : Compress console log and json output]
2025-09-18 11:06:36.436960 | localhost | skipping: Conditional result was False
2025-09-18 11:06:36.442041 | localhost | skipping: Conditional result was False
2025-09-18 11:06:36.453859 | LOOP [upload-logs : Upload compressed console log and json output]
2025-09-18 11:06:36.501569 | localhost | skipping: Conditional result was False
2025-09-18 11:06:36.505706 | localhost | skipping: Conditional result was False
2025-09-18 11:06:36.519511 | LOOP [upload-logs : Upload console log and json output]
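The failure recorded in the traceback above is a plain `KeyError`: the 'local' flavor definition set only contains the keys 'reference' and 'mandatory', while `FlavorManager.__init__` indexes `definitions["recommended"]` unconditionally whenever `recommended=True`. The following minimal sketch reproduces that failure mode and shows the usual `.get()` guard; the helper name `collect_flavors` is hypothetical and this is not the upstream `openstack-flavor-manager` code.

```python
def collect_flavors(definitions: dict, recommended: bool, limit_memory: int = 32) -> list:
    """Collect mandatory and (optionally) recommended flavors.

    Mirrors the failing code path in the traceback: writing
    definitions["recommended"] would raise KeyError when the definition
    file ships only 'reference' and 'mandatory' sections.
    """
    flavors = list(definitions["mandatory"])
    if recommended:
        # .get() with a default list avoids the KeyError seen in the log.
        recommended_flavors = definitions.get("recommended", [])
        limit_memory_mb = limit_memory * 1024  # same GiB-to-MiB filter as lines 103-104
        flavors += [f for f in recommended_flavors if f.get("ram", 0) <= limit_memory_mb]
    return flavors


# Shape taken from the locals dump above: no 'recommended' key present.
definitions = {
    "reference": [{"field": "name", "mandatory_prefix": "SCS-"}],
    "mandatory": [{"name": "SCS-1L-1", "cpus": 1, "ram": 1024, "disk": 0}],
}
print([f["name"] for f in collect_flavors(definitions, recommended=True)])  # → ['SCS-1L-1']
```

With the guard, `--recommended` against a definition set lacking a 'recommended' section degrades to the mandatory list instead of aborting the whole deploy job.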